pandas: BUG: `std` using `numpy.float32` dtype gives incorrect result on constant array.

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import numpy as np

# this should be 0 or very close to 0
print(pd.Series([271.46] * 150000, dtype='float32').std())
# returns: 0.229433074593544

# same result using numpy dtype directly (as you would expect)
print(pd.Series([271.46] * 150000, dtype=np.float32).std())
# returns: 0.229433074593544


# the pandas float32 dtype does not have this issue.
print(pd.Series([271.46] * 150000, dtype=pd.Float32Dtype()).std())
# returns: 0.0

# neither does using the numpy standard deviation function on the numpy values
print(np.std(pd.Series([271.46] * 150000, dtype=np.float32).values))
# returns: 0.0

Issue Description

When using the numpy float32 dtype, by passing either the string 'float32' or the numpy dtype np.float32, the standard deviation of a constant array (an array of identical values) is incorrect. It should be zero, but instead a nonzero value is returned (0.229433... in the example above).

Switching to using the Pandas float32 dtype alleviates the error, as does using np.std on the numpy values of the series.

Expected Behavior

The expected behavior is that the following evaluates to True:

pd.Series([271.46] * 150000, dtype='float32').std() == 0.0
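For context, one plausible mechanism for this kind of error (a hedged illustration, not a claim about bottleneck's actual implementation) is catastrophic cancellation in a one-pass variance formula, E[x²] − E[x]², which subtracts two large, nearly equal float32 numbers. The two-pass formula that np.std uses subtracts the mean first and returns 0.0 here, as shown in the reproducer:

```python
import numpy as np

x = np.full(150000, 271.46, dtype=np.float32)

# Two-pass formula: subtract the mean, then square the deviations.
# For a constant array every deviation is exactly 0, so this is 0.0.
two_pass = np.sqrt(((x - x.mean()) ** 2).sum() / (x.size - 1))

# Naive one-pass formula E[x^2] - E[x]^2: subtracts two large,
# nearly equal float32 sums, so rounding error dominates and the
# result can be far from zero (or even slightly negative).
s = x.sum(dtype=np.float32)
ss = (x * x).sum(dtype=np.float32)
naive_var = (ss - s * s / x.size) / (x.size - 1)

print(two_pass, naive_var)
```

This is only a sketch of the failure mode; whichever library path pandas dispatches to would need to be inspected to confirm the actual formula used.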

Installed Versions

INSTALLED VERSIONS
------------------
commit                : f538741432edf55c6b9fb5d0d496d2dd1d7c2457
python                : 3.11.7.final.0
python-bits           : 64
OS                    : Windows
OS-release            : 10
Version               : 10.0.22621
machine               : AMD64
processor             : Intel64 Family 6 Model 151 Stepping 5, GenuineIntel
byteorder             : little
LC_ALL                : None
LANG                  : None
LOCALE                : English_United States.1252

pandas                : 2.2.0
numpy                 : 1.26.3
pytz                  : 2023.3.post1
dateutil              : 2.8.2
setuptools            : 68.2.2
pip                   : 23.3.1
Cython                : None
pytest                : None
hypothesis            : None
sphinx                : None
blosc                 : None
feather               : None
xlsxwriter            : None
lxml.etree            : 4.9.3
html5lib              : None
pymysql               : None
psycopg2              : None
jinja2                : None
IPython               : 8.15.0
pandas_datareader     : None
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : None
bottleneck            : 1.3.7
dataframe-api-compat  : None
fastparquet           : None
fsspec                : None
gcsfs                 : None
matplotlib            : 3.8.0
numba                 : None
numexpr               : 2.8.7
odfpy                 : None
openpyxl              : None
pandas_gbq            : None
pyarrow               : None
pyreadstat            : None
python-calamine       : None
pyxlsb                : None
s3fs                  : None
scipy                 : 1.11.4
sqlalchemy            : None
tables                : None
tabulate              : None
xarray                : None
xlrd                  : None
zstandard             : None
tzdata                : 2023.3
qtpy                  : None
pyqt5                 : None

About this issue

  • Original URL
  • State: closed
  • Created 4 months ago
  • Comments: 16 (6 by maintainers)

Most upvoted comments

that means that it is most likely a bottleneck issue, could you report there?
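If bottleneck is the suspect, one way to confirm without uninstalling it is pandas' `compute.use_bottleneck` option, which disables the bottleneck-accelerated reduction path (a minimal sketch; on an environment without bottleneck installed the option is a no-op and the result is unchanged):

```python
import pandas as pd

s = pd.Series([271.46] * 150000, dtype="float32")

# With the default settings this reproduces the bug when
# bottleneck is installed.
print(s.std())

# Force pandas to fall back to its own NumPy-based reduction.
# If bottleneck is the cause, this should print 0.0.
with pd.option_context("compute.use_bottleneck", False):
    print(s.std())
```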

Ok, that is interesting. The pip-installed version gives the same outputs as you are seeing:

[screenshot omitted]

The outputs I am originally seeing are from the conda version of the packages.