title | diff | body | url | created_at | closed_at | merged_at | updated_at |
---|---|---|---|---|---|---|---|
DOC: various doc build fixes | diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index e50e792201d26..0c843dd39b56f 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -729,7 +729,7 @@ Int64Index and RangeIndex
Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.
``RangeIndex`` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
-``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analagous to python :ref:`range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`.
+``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analagous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
.. _indexing.float64index:
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 6ceeee4ad6afb..99126527759f6 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -292,7 +292,7 @@
'matplotlib': ('http://matplotlib.org/', None),
'python': ('http://docs.python.org/3', None),
'numpy': ('http://docs.scipy.org/doc/numpy', None),
- 'scipy': ('http://docs.scipy.org/doc/scipy', None),
+ 'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
'py': ('http://pylib.readthedocs.org/en/latest/', None)
}
import glob
diff --git a/doc/source/io.rst b/doc/source/io.rst
index b011072d8c3fb..e9bd029b30537 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4789,7 +4789,7 @@ Reading
Space on disk (in bytes)
-.. code-block::
+.. code-block:: none
25843712 Apr 8 14:11 test.sql
24007368 Apr 8 14:11 test_fixed.hdf
diff --git a/doc/source/options.rst b/doc/source/options.rst
index d761d827006be..25f03df4040a3 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -71,6 +71,7 @@ with no argument ``describe_option`` will print out the descriptions for all ava
.. ipython:: python
:suppress:
+ :okwarning:
pd.reset_option("all")
diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index efe403c85f330..7e72231c21b15 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -17,7 +17,7 @@ rpy2 / R interface
In v0.16.0, the ``pandas.rpy`` interface has been **deprecated and will be
removed in a future version**. Similar functionality can be accessed
- through the `rpy2 <http://rpy2.readthedocs.io/>`_ project.
+ through the `rpy2 <http://rpy2.readthedocs.io/>`__ project.
See the :ref:`updating <rpy.updating>` section for a guide to port your
code from the ``pandas.rpy`` to ``rpy2`` functions.
@@ -73,7 +73,7 @@ The ``convert_to_r_matrix`` function can be replaced by the normal
comparison to the ones in pandas, please report this at the
`issue tracker <https://github.com/pydata/pandas/issues>`_.
-See also the documentation of the `rpy2 <http://rpy.sourceforge.net/>`_ project.
+See also the documentation of the `rpy2 <http://rpy.sourceforge.net/>`__ project.
R interface with rpy2
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt
index 93c76bc80684f..7418cd0e6baa3 100644
--- a/doc/source/whatsnew/v0.18.0.txt
+++ b/doc/source/whatsnew/v0.18.0.txt
@@ -764,7 +764,7 @@ yields a ``Resampler``.
r
Downsampling
-''''''''''''
+""""""""""""
You can then use this object to perform operations.
These are downsampling operations (going from a higher frequency to a lower one).
@@ -796,7 +796,7 @@ These accessors can of course, be combined
r[['A','B']].agg(['mean','sum'])
Upsampling
-''''''''''
+""""""""""
.. currentmodule:: pandas.tseries.resample
@@ -842,7 +842,7 @@ New API
In the new API, you can either downsample OR upsample. The prior implementation would allow you to pass an aggregator function (like ``mean``) even though you were upsampling, providing a bit of confusion.
Previous API will work but with deprecations
-''''''''''''''''''''''''''''''''''''''''''''
+""""""""""""""""""""""""""""""""""""""""""""
.. warning::
diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt
index 51982c42499ff..ba14ac51012c7 100644
--- a/doc/source/whatsnew/v0.18.1.txt
+++ b/doc/source/whatsnew/v0.18.1.txt
@@ -374,7 +374,7 @@ New Behavior:
df.groupby('c', sort=False).nth(1)
-.. _whatsnew_0181.numpy_compatibility
+.. _whatsnew_0181.numpy_compatibility:
numpy function compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index cc639b562dab8..f6915e962c049 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1197,7 +1197,8 @@ def nth(self, n, dropna=None):
1 4
5 6
- # NaNs denote group exhausted when using dropna
+ NaNs denote group exhausted when using dropna
+
>>> g.nth(1, dropna='any')
B
A
| Up to a clean doc build :-)
| https://api.github.com/repos/pandas-dev/pandas/pulls/13502 | 2016-06-23T15:30:18Z | 2016-06-24T12:04:12Z | 2016-06-24T12:04:12Z | 2016-06-24T12:04:18Z |
DOC: fix accessor docs for sphinx > 1.3 (GH12161) | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 87510d13ee484..6ceeee4ad6afb 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -318,6 +318,7 @@
# Add custom Documenter to handle attributes/methods of an AccessorProperty
# eg pandas.Series.str and pandas.Series.dt (see GH9322)
+import sphinx
from sphinx.util import rpartition
from sphinx.ext.autodoc import Documenter, MethodDocumenter, AttributeDocumenter
from sphinx.ext.autosummary import Autosummary
@@ -365,7 +366,10 @@ def resolve_name(self, modname, parents, path, base):
if not modname:
modname = self.env.temp_data.get('autodoc:module')
if not modname:
- modname = self.env.temp_data.get('py:module')
+ if sphinx.__version__ > '1.3':
+ modname = self.env.ref_context.get('py:module')
+ else:
+ modname = self.env.temp_data.get('py:module')
# ... else, it stays None, which means invalid
return modname, parents + [base]
| closes #12161
With this change, the links in the api summary to the accessor methods should work again.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13499 | 2016-06-23T12:38:50Z | 2016-06-23T15:37:45Z | 2016-06-23T15:37:45Z | 2016-06-23T15:37:45Z |
TST: Clean up tests of DataFrame.sort_{index,values} | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b4b35953b4282..15d4bb6b5a537 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -68,8 +68,12 @@
# ---------------------------------------------------------------------
# Docstring templates
-_shared_doc_kwargs = dict(axes='index, columns', klass='DataFrame',
- axes_single_arg="{0, 1, 'index', 'columns'}")
+_shared_doc_kwargs = dict(
+ axes='index, columns', klass='DataFrame',
+ axes_single_arg="{0, 1, 'index', 'columns'}",
+ optional_by="""
+ by : str or list of str
+ Name or list of names which refer to the axis items.""")
_numeric_only_doc = """numeric_only : boolean, default None
Include only float, int, boolean data. If None, will attempt to use
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 348281d1a7e30..0512afa402692 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -37,10 +37,13 @@
# goal is to be able to define the docs close to function, while still being
# able to share
_shared_docs = dict()
-_shared_doc_kwargs = dict(axes='keywords for axes', klass='NDFrame',
- axes_single_arg='int or labels for object',
- args_transpose='axes to permute (int or label for'
- ' object)')
+_shared_doc_kwargs = dict(
+ axes='keywords for axes', klass='NDFrame',
+ axes_single_arg='int or labels for object',
+ args_transpose='axes to permute (int or label for object)',
+ optional_by="""
+ by : str or list of str
+ Name or list of names which refer to the axis items.""")
def is_dictlike(x):
@@ -1956,21 +1959,20 @@ def add_suffix(self, suffix):
.. versionadded:: 0.17.0
Parameters
- ----------
- by : string name or list of names which refer to the axis items
- axis : %(axes)s to direct sorting
- ascending : bool or list of bool
+ ----------%(optional_by)s
+ axis : %(axes)s to direct sorting, default 0
+ ascending : bool or list of bool, default True
Sort ascending vs. descending. Specify list for multiple sort
orders. If this is a list of bools, must match the length of
the by.
- inplace : bool
+ inplace : bool, default False
if True, perform operation in-place
- kind : {`quicksort`, `mergesort`, `heapsort`}
+ kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also ndarray.np.sort for more
information. `mergesort` is the only stable algorithm. For
DataFrames, this option is only applied when sorting on a single
column or label.
- na_position : {'first', 'last'}
+ na_position : {'first', 'last'}, default 'last'
`first` puts NaNs at the beginning, `last` puts NaNs at the end
Returns
@@ -1992,16 +1994,16 @@ def sort_values(self, by, axis=0, ascending=True, inplace=False,
if not None, sort on values in specified index level(s)
ascending : boolean, default True
Sort ascending vs. descending
- inplace : bool
+ inplace : bool, default False
if True, perform operation in-place
- kind : {`quicksort`, `mergesort`, `heapsort`}
+ kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
Choice of sorting algorithm. See also ndarray.np.sort for more
information. `mergesort` is the only stable algorithm. For
DataFrames, this option is only applied when sorting on a single
column or label.
- na_position : {'first', 'last'}
+ na_position : {'first', 'last'}, default 'last'
`first` puts NaNs at the beginning, `last` puts NaNs at the end
- sort_remaining : bool
+ sort_remaining : bool, default True
if true and sorting by level and index is multilevel, sort by other
levels too (in order) after sorting by specified level
diff --git a/pandas/core/series.py b/pandas/core/series.py
index cf1639bacc3be..6f3190c288e94 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -62,7 +62,8 @@
axes='index', klass='Series', axes_single_arg="{0, 'index'}",
inplace="""inplace : boolean, default False
If True, performs operation inplace and returns None.""",
- duplicated='Series')
+ duplicated='Series',
+ optional_by='')
def _coerce_method(converter):
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index ff2159f8b6f40..4d57216c8f870 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -21,75 +21,68 @@ class TestDataFrameSorting(tm.TestCase, TestData):
_multiprocess_can_split_ = True
- def test_sort_values(self):
- # API for 9816
+ def test_sort_index(self):
+ # GH13496
- # sort_index
frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4],
columns=['A', 'B', 'C', 'D'])
- # 9816 deprecated
- with tm.assert_produces_warning(FutureWarning):
- frame.sort(columns='A')
- with tm.assert_produces_warning(FutureWarning):
- frame.sort()
-
+ # axis=0 : sort rows by index labels
unordered = frame.ix[[3, 2, 4, 1]]
- expected = unordered.sort_index()
-
result = unordered.sort_index(axis=0)
+ expected = frame
assert_frame_equal(result, expected)
- unordered = frame.ix[:, [2, 1, 3, 0]]
- expected = unordered.sort_index(axis=1)
+ result = unordered.sort_index(ascending=False)
+ expected = frame[::-1]
+ assert_frame_equal(result, expected)
+ # axis=1 : sort columns by column names
+ unordered = frame.ix[:, [2, 1, 3, 0]]
result = unordered.sort_index(axis=1)
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, frame)
+
+ result = unordered.sort_index(axis=1, ascending=False)
+ expected = frame.ix[:, ::-1]
assert_frame_equal(result, expected)
- # sortlevel
- mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
+ def test_sort_index_multiindex(self):
+ # GH13496
+
+ # sort rows by specified level of multi-index
+ mi = MultiIndex.from_tuples([[2, 1, 3], [1, 1, 1]], names=list('ABC'))
df = DataFrame([[1, 2], [3, 4]], mi)
result = df.sort_index(level='A', sort_remaining=False)
expected = df.sortlevel('A', sort_remaining=False)
assert_frame_equal(result, expected)
+ # sort columns by specified level of multi-index
df = df.T
result = df.sort_index(level='A', axis=1, sort_remaining=False)
expected = df.sortlevel('A', axis=1, sort_remaining=False)
assert_frame_equal(result, expected)
- # MI sort, but no by
+ # MI sort, but no level: sort_level has no effect
mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
df = DataFrame([[1, 2], [3, 4]], mi)
result = df.sort_index(sort_remaining=False)
expected = df.sort_index()
assert_frame_equal(result, expected)
- def test_sort_index(self):
+ def test_sort(self):
frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4],
columns=['A', 'B', 'C', 'D'])
- # axis=0
- unordered = frame.ix[[3, 2, 4, 1]]
- sorted_df = unordered.sort_index(axis=0)
- expected = frame
- assert_frame_equal(sorted_df, expected)
-
- sorted_df = unordered.sort_index(ascending=False)
- expected = frame[::-1]
- assert_frame_equal(sorted_df, expected)
-
- # axis=1
- unordered = frame.ix[:, ['D', 'B', 'C', 'A']]
- sorted_df = unordered.sort_index(axis=1)
- expected = frame
- assert_frame_equal(sorted_df, expected)
+ # 9816 deprecated
+ with tm.assert_produces_warning(FutureWarning):
+ frame.sort(columns='A')
+ with tm.assert_produces_warning(FutureWarning):
+ frame.sort()
- sorted_df = unordered.sort_index(axis=1, ascending=False)
- expected = frame.ix[:, ::-1]
- assert_frame_equal(sorted_df, expected)
+ def test_sort_values(self):
+ frame = DataFrame([[1, 1, 2], [3, 1, 0], [4, 5, 6]],
+ index=[1, 2, 3], columns=list('ABC'))
# by column
sorted_df = frame.sort_values(by='A')
@@ -109,16 +102,17 @@ def test_sort_index(self):
sorted_df = frame.sort_values(by=['A'], ascending=[False])
assert_frame_equal(sorted_df, expected)
- # check for now
- sorted_df = frame.sort_values(by='A')
- assert_frame_equal(sorted_df, expected[::-1])
- expected = frame.sort_values(by='A')
+ # multiple bys
+ sorted_df = frame.sort_values(by=['B', 'C'])
+ expected = frame.loc[[2, 1, 3]]
assert_frame_equal(sorted_df, expected)
- expected = frame.sort_values(by=['A', 'B'], ascending=False)
- sorted_df = frame.sort_values(by=['A', 'B'])
+ sorted_df = frame.sort_values(by=['B', 'C'], ascending=False)
assert_frame_equal(sorted_df, expected[::-1])
+ sorted_df = frame.sort_values(by=['B', 'A'], ascending=[True, False])
+ assert_frame_equal(sorted_df, expected)
+
self.assertRaises(ValueError, lambda: frame.sort_values(
by=['A', 'B'], axis=2, inplace=True))
@@ -130,6 +124,25 @@ def test_sort_index(self):
with assertRaisesRegexp(ValueError, msg):
frame.sort_values(by=['A', 'B'], axis=0, ascending=[True] * 5)
+ def test_sort_values_inplace(self):
+ frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4],
+ columns=['A', 'B', 'C', 'D'])
+
+ sorted_df = frame.copy()
+ sorted_df.sort_values(by='A', inplace=True)
+ expected = frame.sort_values(by='A')
+ assert_frame_equal(sorted_df, expected)
+
+ sorted_df = frame.copy()
+ sorted_df.sort_values(by='A', ascending=False, inplace=True)
+ expected = frame.sort_values(by='A', ascending=False)
+ assert_frame_equal(sorted_df, expected)
+
+ sorted_df = frame.copy()
+ sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True)
+ expected = frame.sort_values(by=['A', 'B'], ascending=False)
+ assert_frame_equal(sorted_df, expected)
+
def test_sort_index_categorical_index(self):
df = (DataFrame({'A': np.arange(6, dtype='int64'),
@@ -361,25 +374,6 @@ def test_sort_index_different_sortorder(self):
result = idf['C'].sort_index(ascending=[1, 0])
assert_series_equal(result, expected['C'])
- def test_sort_inplace(self):
- frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4],
- columns=['A', 'B', 'C', 'D'])
-
- sorted_df = frame.copy()
- sorted_df.sort_values(by='A', inplace=True)
- expected = frame.sort_values(by='A')
- assert_frame_equal(sorted_df, expected)
-
- sorted_df = frame.copy()
- sorted_df.sort_values(by='A', ascending=False, inplace=True)
- expected = frame.sort_values(by='A', ascending=False)
- assert_frame_equal(sorted_df, expected)
-
- sorted_df = frame.copy()
- sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True)
- expected = frame.sort_values(by=['A', 'B'], ascending=False)
- assert_frame_equal(sorted_df, expected)
-
def test_sort_index_duplicates(self):
# with 9816, these are all translated to .sort_values
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 433f0f4bc67f5..4b2e5d251f7eb 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -5,7 +5,6 @@
from distutils.version import LooseVersion
import nose
-import random
from numpy import nan
import numpy as np
@@ -1414,141 +1413,6 @@ def test_is_monotonic(self):
self.assertFalse(s.is_monotonic)
self.assertTrue(s.is_monotonic_decreasing)
- def test_sort_values(self):
-
- ts = self.ts.copy()
-
- # 9816 deprecated
- with tm.assert_produces_warning(FutureWarning):
- ts.sort()
-
- self.assert_series_equal(ts, self.ts.sort_values())
- self.assert_index_equal(ts.index, self.ts.sort_values().index)
-
- ts.sort_values(ascending=False, inplace=True)
- self.assert_series_equal(ts, self.ts.sort_values(ascending=False))
- self.assert_index_equal(ts.index,
- self.ts.sort_values(ascending=False).index)
-
- # GH 5856/5853
- # Series.sort_values operating on a view
- df = DataFrame(np.random.randn(10, 4))
- s = df.iloc[:, 0]
-
- def f():
- s.sort_values(inplace=True)
-
- self.assertRaises(ValueError, f)
-
- # test order/sort inplace
- # GH6859
- ts1 = self.ts.copy()
- ts1.sort_values(ascending=False, inplace=True)
- ts2 = self.ts.copy()
- ts2.sort_values(ascending=False, inplace=True)
- assert_series_equal(ts1, ts2)
-
- ts1 = self.ts.copy()
- ts1 = ts1.sort_values(ascending=False, inplace=False)
- ts2 = self.ts.copy()
- ts2 = ts.sort_values(ascending=False)
- assert_series_equal(ts1, ts2)
-
- def test_sort_index(self):
- rindex = list(self.ts.index)
- random.shuffle(rindex)
-
- random_order = self.ts.reindex(rindex)
- sorted_series = random_order.sort_index()
- assert_series_equal(sorted_series, self.ts)
-
- # descending
- sorted_series = random_order.sort_index(ascending=False)
- assert_series_equal(sorted_series,
- self.ts.reindex(self.ts.index[::-1]))
-
- def test_sort_index_inplace(self):
-
- # For #11402
- rindex = list(self.ts.index)
- random.shuffle(rindex)
-
- # descending
- random_order = self.ts.reindex(rindex)
- result = random_order.sort_index(ascending=False, inplace=True)
- self.assertIs(result, None,
- msg='sort_index() inplace should return None')
- assert_series_equal(random_order, self.ts.reindex(self.ts.index[::-1]))
-
- # ascending
- random_order = self.ts.reindex(rindex)
- result = random_order.sort_index(ascending=True, inplace=True)
- self.assertIs(result, None,
- msg='sort_index() inplace should return None')
- assert_series_equal(random_order, self.ts)
-
- def test_sort_API(self):
-
- # API for 9816
-
- # sortlevel
- mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
- s = Series([1, 2], mi)
- backwards = s.iloc[[1, 0]]
-
- res = s.sort_index(level='A')
- assert_series_equal(backwards, res)
-
- # sort_index
- rindex = list(self.ts.index)
- random.shuffle(rindex)
-
- random_order = self.ts.reindex(rindex)
- sorted_series = random_order.sort_index(level=0)
- assert_series_equal(sorted_series, self.ts)
-
- # compat on axis
- sorted_series = random_order.sort_index(axis=0)
- assert_series_equal(sorted_series, self.ts)
-
- self.assertRaises(ValueError, lambda: random_order.sort_values(axis=1))
-
- sorted_series = random_order.sort_index(level=0, axis=0)
- assert_series_equal(sorted_series, self.ts)
-
- self.assertRaises(ValueError,
- lambda: random_order.sort_index(level=0, axis=1))
-
- def test_order(self):
-
- # 9816 deprecated
- with tm.assert_produces_warning(FutureWarning):
- self.ts.order()
-
- ts = self.ts.copy()
- ts[:5] = np.NaN
- vals = ts.values
-
- result = ts.sort_values()
- self.assertTrue(np.isnan(result[-5:]).all())
- self.assert_numpy_array_equal(result[:-5].values, np.sort(vals[5:]))
-
- result = ts.sort_values(na_position='first')
- self.assertTrue(np.isnan(result[:5]).all())
- self.assert_numpy_array_equal(result[5:].values, np.sort(vals[5:]))
-
- # something object-type
- ser = Series(['A', 'B'], [1, 2])
- # no failure
- ser.sort_values()
-
- # ascending=False
- ordered = ts.sort_values(ascending=False)
- expected = np.sort(ts.valid().values)[::-1]
- assert_almost_equal(expected, ordered.valid().values)
- ordered = ts.sort_values(ascending=False, na_position='first')
- assert_almost_equal(expected, ordered.valid().values)
-
def test_nsmallest_nlargest(self):
# float, int, datetime64 (use i8), timedelts64 (same),
# object that are numbers, object that are strings
diff --git a/pandas/tests/series/test_sorting.py b/pandas/tests/series/test_sorting.py
new file mode 100644
index 0000000000000..826201adbdb50
--- /dev/null
+++ b/pandas/tests/series/test_sorting.py
@@ -0,0 +1,146 @@
+# coding=utf-8
+
+import numpy as np
+import random
+
+from pandas import (DataFrame, Series, MultiIndex)
+
+from pandas.util.testing import (assert_series_equal, assert_almost_equal)
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class TestSeriesSorting(TestData, tm.TestCase):
+
+ _multiprocess_can_split_ = True
+
+ def test_sort(self):
+
+ ts = self.ts.copy()
+
+ # 9816 deprecated
+ with tm.assert_produces_warning(FutureWarning):
+ ts.sort() # sorts inplace
+ self.assert_series_equal(ts, self.ts.sort_values())
+
+ def test_order(self):
+
+ # 9816 deprecated
+ with tm.assert_produces_warning(FutureWarning):
+ result = self.ts.order()
+ self.assert_series_equal(result, self.ts.sort_values())
+
+ def test_sort_values(self):
+
+ # check indexes are reordered corresponding with the values
+ ser = Series([3, 2, 4, 1], ['A', 'B', 'C', 'D'])
+ expected = Series([1, 2, 3, 4], ['D', 'B', 'A', 'C'])
+ result = ser.sort_values()
+ self.assert_series_equal(expected, result)
+
+ ts = self.ts.copy()
+ ts[:5] = np.NaN
+ vals = ts.values
+
+ result = ts.sort_values()
+ self.assertTrue(np.isnan(result[-5:]).all())
+ self.assert_numpy_array_equal(result[:-5].values, np.sort(vals[5:]))
+
+ # na_position
+ result = ts.sort_values(na_position='first')
+ self.assertTrue(np.isnan(result[:5]).all())
+ self.assert_numpy_array_equal(result[5:].values, np.sort(vals[5:]))
+
+ # something object-type
+ ser = Series(['A', 'B'], [1, 2])
+ # no failure
+ ser.sort_values()
+
+ # ascending=False
+ ordered = ts.sort_values(ascending=False)
+ expected = np.sort(ts.valid().values)[::-1]
+ assert_almost_equal(expected, ordered.valid().values)
+ ordered = ts.sort_values(ascending=False, na_position='first')
+ assert_almost_equal(expected, ordered.valid().values)
+
+ # inplace=True
+ ts = self.ts.copy()
+ ts.sort_values(ascending=False, inplace=True)
+ self.assert_series_equal(ts, self.ts.sort_values(ascending=False))
+ self.assert_index_equal(ts.index,
+ self.ts.sort_values(ascending=False).index)
+
+ # GH 5856/5853
+ # Series.sort_values operating on a view
+ df = DataFrame(np.random.randn(10, 4))
+ s = df.iloc[:, 0]
+
+ def f():
+ s.sort_values(inplace=True)
+
+ self.assertRaises(ValueError, f)
+
+ def test_sort_index(self):
+ rindex = list(self.ts.index)
+ random.shuffle(rindex)
+
+ random_order = self.ts.reindex(rindex)
+ sorted_series = random_order.sort_index()
+ assert_series_equal(sorted_series, self.ts)
+
+ # descending
+ sorted_series = random_order.sort_index(ascending=False)
+ assert_series_equal(sorted_series,
+ self.ts.reindex(self.ts.index[::-1]))
+
+ # compat on level
+ sorted_series = random_order.sort_index(level=0)
+ assert_series_equal(sorted_series, self.ts)
+
+ # compat on axis
+ sorted_series = random_order.sort_index(axis=0)
+ assert_series_equal(sorted_series, self.ts)
+
+ self.assertRaises(ValueError, lambda: random_order.sort_values(axis=1))
+
+ sorted_series = random_order.sort_index(level=0, axis=0)
+ assert_series_equal(sorted_series, self.ts)
+
+ self.assertRaises(ValueError,
+ lambda: random_order.sort_index(level=0, axis=1))
+
+ def test_sort_index_inplace(self):
+
+ # For #11402
+ rindex = list(self.ts.index)
+ random.shuffle(rindex)
+
+ # descending
+ random_order = self.ts.reindex(rindex)
+ result = random_order.sort_index(ascending=False, inplace=True)
+ self.assertIs(result, None,
+ msg='sort_index() inplace should return None')
+ assert_series_equal(random_order, self.ts.reindex(self.ts.index[::-1]))
+
+ # ascending
+ random_order = self.ts.reindex(rindex)
+ result = random_order.sort_index(ascending=True, inplace=True)
+ self.assertIs(result, None,
+ msg='sort_index() inplace should return None')
+ assert_series_equal(random_order, self.ts)
+
+ def test_sort_index_multiindex(self):
+
+ mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
+ s = Series([1, 2], mi)
+ backwards = s.iloc[[1, 0]]
+
+ # implicit sort_remaining=True
+ res = s.sort_index(level='A')
+ assert_series_equal(backwards, res)
+
+ # GH13496
+ # rows share same level='A': sort has no effect without remaining lvls
+ res = s.sort_index(level='A', sort_remaining=False)
+ assert_series_equal(s, res)
| Before taking a stab at #10806, I looked at the relevant tests and they didn't make much sense to me.
This commit fixes:
- Some expected results were obtained by running code identical to what's being tested (i.e., the test could never fail).
- Removed duplicate assertions testing the same functionality both in `test_sort_index` and in `test_sort_values`.
- Switched the names `test_sort_index` and `test_sort_values` based on the (remaining) assertions being tested in their bodies.
Still confusing, unfixed:
- The `DataFrame.sort_index` docstring doesn't specify what happens when `level` is unspecified and `sort_remaining=False`. However there is a legacy test using this case: it's not clear to me what result or behavior is being tested.
First time pandas contributor, so I'd appreciate a good critical review!
- [X] tests pass
- [X] passes `git diff upstream/master | flake8 --diff`
- [x] ensure default options in docstrings
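The multiple-`by` cases added in `test_sort_values` reduce to lexicographic sorting with a per-key direction; a small stdlib sketch of the same ordering (plain Python for illustration, not pandas code), using the frame from the test (`[[1, 1, 2], [3, 1, 0], [4, 5, 6]]` with index `[1, 2, 3]`):

```python
# index -> (A, B, C), mirroring the DataFrame built in test_sort_values
rows = {1: (1, 1, 2), 2: (3, 1, 0), 3: (4, 5, 6)}

# by=['B', 'C']: lexicographic comparison on column B, then C
by_bc = sorted(rows, key=lambda i: (rows[i][1], rows[i][2]))

# by=['B', 'A'], ascending=[True, False]: negate the descending numeric key
by_b_asc_a_desc = sorted(rows, key=lambda i: (rows[i][1], -rows[i][0]))

print(by_bc)            # [2, 1, 3], matching expected = frame.loc[[2, 1, 3]]
print(by_b_asc_a_desc)  # [2, 1, 3] as well, as the test asserts
```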
| https://api.github.com/repos/pandas-dev/pandas/pulls/13496 | 2016-06-22T07:15:59Z | 2016-07-11T18:21:06Z | 2016-07-11T18:21:06Z | 2016-07-11T18:47:08Z |
TST: Fix MMapWrapper init test for Windows | diff --git a/pandas/io/tests/test_common.py b/pandas/io/tests/test_common.py
index 46c34abf5aeb7..cf5ec7d911051 100644
--- a/pandas/io/tests/test_common.py
+++ b/pandas/io/tests/test_common.py
@@ -1,7 +1,6 @@
"""
Tests for the pandas.io.common functionalities
"""
-import nose
import mmap
import os
from os.path import isabs
@@ -98,15 +97,18 @@ def setUp(self):
'test_mmap.csv')
def test_constructor_bad_file(self):
- if is_platform_windows():
- raise nose.SkipTest("skipping construction error messages "
- "tests on windows")
-
non_file = StringIO('I am not a file')
non_file.fileno = lambda: -1
- msg = "Invalid argument"
- tm.assertRaisesRegexp(mmap.error, msg, common.MMapWrapper, non_file)
+ # the error raised is different on Windows
+ if is_platform_windows():
+ msg = "The parameter is incorrect"
+ err = OSError
+ else:
+ msg = "Invalid argument"
+ err = mmap.error
+
+ tm.assertRaisesRegexp(err, msg, common.MMapWrapper, non_file)
target = open(self.mmap_file, 'r')
target.close()
| Turns out Windows errors differently when an invalid `fileno` is passed into the `mmap` constructor, so there's no need to skip the test (xref: 9670b31).
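The platform split can be seen with a minimal sketch (an illustrative assumption, not pandas code): hand `mmap` a descriptor that does not refer to a real file and the call fails, but the exception class and message depend on the platform and Python version, which is why the fixed test selects `err` and `msg` per platform instead of skipping.

```python
import mmap

try:
    # -1 mimics the stubbed fileno() returned by the test's StringIO object
    mmap.mmap(-1, 0)
    raised = None
except (OSError, ValueError) as err:  # mmap.error is an OSError alias on py3
    raised = err

print("mmap rejected the descriptor:", type(raised).__name__)
```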
| https://api.github.com/repos/pandas-dev/pandas/pulls/13494 | 2016-06-21T08:58:04Z | 2016-06-22T10:08:30Z | null | 2016-06-22T12:38:48Z |
DOC/WIP: travis notebook doc build | diff --git a/ci/requirements-2.7_DOC_BUILD.run b/ci/requirements-2.7_DOC_BUILD.run
index 507ce9ea5aac5..b87a41df4191d 100644
--- a/ci/requirements-2.7_DOC_BUILD.run
+++ b/ci/requirements-2.7_DOC_BUILD.run
@@ -1,7 +1,9 @@
ipython
+ipykernel
sphinx
nbconvert
nbformat
+notebook
matplotlib
scipy
lxml
diff --git a/doc/make.py b/doc/make.py
index 8e7b1d95dbafb..d46be2611ce3d 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -193,7 +193,8 @@ def html():
executed = execute_nb(nb, nb + '.executed', allow_errors=True,
kernel_name=kernel_name)
convert_nb(executed, nb.rstrip('.ipynb') + '.html')
- except (ImportError, IndexError):
+ except (ImportError, IndexError) as e:
+ print(e)
print("Failed to convert %s" % nb)
if os.system('sphinx-build -P -b html -d build/doctrees '
| Just want to see the output log from the pull request.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13493 | 2016-06-21T01:05:58Z | 2016-06-23T13:28:02Z | 2016-06-23T13:28:02Z | 2017-04-05T02:06:58Z |
DOC: find kernelspec | diff --git a/doc/make.py b/doc/make.py
index 05bf618ee677e..8e7b1d95dbafb 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -116,6 +116,11 @@ def cleanup_nb(nb):
pass
+def get_kernel():
+ """Find the kernel name for your python version"""
+ return 'python%s' % sys.version_info.major
+
+
def execute_nb(src, dst, allow_errors=False, timeout=1000, kernel_name=''):
"""
Execute notebook in `src` and write the output to `dst`
@@ -184,10 +189,12 @@ def html():
with cleanup_nb(nb):
try:
print("Converting %s" % nb)
- executed = execute_nb(nb, nb + '.executed', allow_errors=True)
+ kernel_name = get_kernel()
+ executed = execute_nb(nb, nb + '.executed', allow_errors=True,
+ kernel_name=kernel_name)
convert_nb(executed, nb.rstrip('.ipynb') + '.html')
- except ImportError:
- pass
+ except (ImportError, IndexError):
+ print("Failed to convert %s" % nb)
if os.system('sphinx-build -P -b html -d build/doctrees '
'source build/html'):
@@ -199,6 +206,7 @@ def html():
except:
pass
+
def zip_html():
try:
print("\nZipping up HTML docs...")
| xref https://github.com/pydata/pandas/pull/13487
Problem was the notebook was written with a python3 kernel, but the doc build is python2.
There are potential problems here if you have non-default kernel names in your environment...
| https://api.github.com/repos/pandas-dev/pandas/pulls/13491 | 2016-06-20T12:29:13Z | 2016-06-20T14:34:09Z | 2016-06-20T14:34:09Z | 2017-04-05T02:06:52Z |
DOC: add nbformat for notebook conversion | diff --git a/ci/requirements-2.7_DOC_BUILD.run b/ci/requirements-2.7_DOC_BUILD.run
index 854776762fdb5..507ce9ea5aac5 100644
--- a/ci/requirements-2.7_DOC_BUILD.run
+++ b/ci/requirements-2.7_DOC_BUILD.run
@@ -1,6 +1,7 @@
ipython
sphinx
nbconvert
+nbformat
matplotlib
scipy
lxml
| The build [here](https://travis-ci.org/pydata/pandas/jobs/138633677#L1866) didn't succeed. Had `nbconvert` in the deps, but not `nbformat`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13487 | 2016-06-19T13:23:43Z | 2016-06-19T14:31:41Z | null | 2017-04-05T02:06:51Z |
ENH:read_html() handles tables with multiple header rows #13434 | diff --git a/pandas/io/html.py b/pandas/io/html.py
index 3c38dae91eb89..83344f2f6992e 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -355,7 +355,8 @@ def _parse_raw_thead(self, table):
thead = self._parse_thead(table)
res = []
if thead:
- res = lmap(self._text_getter, self._parse_th(thead[0]))
+ row = self._parse_th(thead[0])[0].find_parent('tr')
+ res = lmap(self._text_getter, self._parse_th(row))
return np.atleast_1d(
np.array(res).squeeze()) if res and len(res) == 1 else res
@@ -591,7 +592,7 @@ def _parse_tfoot(self, table):
return table.xpath('.//tfoot')
def _parse_raw_thead(self, table):
- expr = './/thead//th'
+ expr = './/thead//tr[th][1]//th'
return [_remove_whitespace(x.text_content()) for x in
table.xpath(expr)]
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 7b4e775db9476..030444d7c807a 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -694,6 +694,7 @@ def test_bool_header_arg(self):
with tm.assertRaises(TypeError):
read_html(self.spam_data, header=arg)
+
def test_converters(self):
# GH 13461
html_data = """<table>
@@ -760,6 +761,34 @@ def test_keep_default_na(self):
html_df = read_html(html_data, keep_default_na=True)[0]
tm.assert_frame_equal(expected_df, html_df)
+ def test_multiple_header(self):
+ data = StringIO('''<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th>Name</th>
+ <th>Age</th>
+ <th>Party</th>
+ </tr>
+ <tr>
+ <th></th>
+ <th>Gender</th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>Hillary</th>
+ <td>68</td>
+ <td>D</td>
+ </tr>
+ </tbody>
+ </table>''')
+ expected = DataFrame(columns=["Name", "Age", "Party"],
+ data=[("Hillary", 68, "D")])
+ result = self.read_html(data)[0]
+ tm.assert_frame_equal(expected, result)
+
+
def _lang_enc(filename):
return os.path.splitext(os.path.basename(filename))[0].split('_')
| - [ ] closes #13434
- [ ] read_html() handles tables with multiple header rows
Now it produces the following output for the issue:

```
  Unnamed: 0  Age Party
0    Hillary   68     D
1     Bernie   74     D
2     Donald   69     R
```
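For reference, a minimal sketch of the behaviour this patch targets (assumes an HTML parser such as lxml or bs4 is installed; the table is a trimmed version of the new test case):

```python
from io import StringIO

import pandas as pd

html = """<table>
  <thead>
    <tr><th>Name</th><th>Age</th><th>Party</th></tr>
    <tr><th></th><th>Gender</th><th></th></tr>
  </thead>
  <tbody>
    <tr><td>Hillary</td><td>68</td><td>D</td></tr>
  </tbody>
</table>"""

# read_html returns a list of DataFrames, one per <table> found
dfs = pd.read_html(StringIO(html))
print(dfs[0])
```

With the fix, the extra `<thead>` row no longer shifts the data columns out from under the header.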
| https://api.github.com/repos/pandas-dev/pandas/pulls/13485 | 2016-06-19T05:52:13Z | 2016-11-16T22:27:09Z | null | 2016-11-16T22:27:09Z |
BUG: is_normalized returned False for local tz | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index c8a8b8eb0547b..a2f7981a85aa8 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -524,3 +524,5 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+
+- Bug in ``DatetimeIndex.is_normalized`` returns False for normalized date_range in case of local timezones (:issue:`13459`)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 9c13162bd774c..ab5362da21a7d 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -34,7 +34,8 @@
assert_panel_equal,
assert_frame_equal,
assert_series_equal,
- assert_produces_warning)
+ assert_produces_warning,
+ set_timezone)
from pandas import concat, Timestamp
from pandas import compat
from pandas.compat import range, lrange, u
@@ -5309,14 +5310,6 @@ def test_store_timezone(self):
# issue storing datetime.date with a timezone as it resets when read
# back in a new timezone
- import platform
- if platform.system() == "Windows":
- raise nose.SkipTest("timezone setting not supported on windows")
-
- import datetime
- import time
- import os
-
# original method
with ensure_clean_store(self.path) as store:
@@ -5327,34 +5320,17 @@ def test_store_timezone(self):
assert_frame_equal(result, df)
# with tz setting
- orig_tz = os.environ.get('TZ')
-
- def setTZ(tz):
- if tz is None:
- try:
- del os.environ['TZ']
- except:
- pass
- else:
- os.environ['TZ'] = tz
- time.tzset()
-
- try:
-
- with ensure_clean_store(self.path) as store:
+ with ensure_clean_store(self.path) as store:
- setTZ('EST5EDT')
+ with set_timezone('EST5EDT'):
today = datetime.date(2013, 9, 10)
df = DataFrame([1, 2, 3], index=[today, today, today])
store['obj1'] = df
- setTZ('CST6CDT')
+ with set_timezone('CST6CDT'):
result = store['obj1']
- assert_frame_equal(result, df)
-
- finally:
- setTZ(orig_tz)
+ assert_frame_equal(result, df)
def test_legacy_datetimetz_object(self):
# legacy from < 0.17.0
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index afe9d0652db19..d68ff793c9b6a 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -18,7 +18,7 @@
import pandas.util.testing as tm
from pandas.types.api import DatetimeTZDtype
-from pandas.util.testing import assert_frame_equal
+from pandas.util.testing import assert_frame_equal, set_timezone
from pandas.compat import lrange, zip
try:
@@ -1398,6 +1398,26 @@ def test_normalize_tz(self):
self.assertTrue(result.is_normalized)
self.assertFalse(rng.is_normalized)
+ def test_normalize_tz_local(self):
+ # GH 13459
+ from dateutil.tz import tzlocal
+
+ timezones = ['US/Pacific', 'US/Eastern', 'UTC', 'Asia/Kolkata',
+ 'Asia/Shanghai', 'Australia/Canberra']
+
+ for timezone in timezones:
+ with set_timezone(timezone):
+ rng = date_range('1/1/2000 9:30', periods=10, freq='D',
+ tz=tzlocal())
+
+ result = rng.normalize()
+ expected = date_range('1/1/2000', periods=10, freq='D',
+ tz=tzlocal())
+ self.assert_index_equal(result, expected)
+
+ self.assertTrue(result.is_normalized)
+ self.assertFalse(rng.is_normalized)
+
def test_tzaware_offset(self):
dates = date_range('2012-11-01', periods=3, tz='US/Pacific')
offset = dates + offsets.Hour(5)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 7de62fbe71615..8837881af0b6c 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -4810,12 +4810,10 @@ def dates_normalized(ndarray[int64_t] stamps, tz=None):
elif _is_tzlocal(tz):
for i in range(n):
pandas_datetime_to_datetimestruct(stamps[i], PANDAS_FR_ns, &dts)
- if (dts.min + dts.sec + dts.us) > 0:
- return False
dt = datetime(dts.year, dts.month, dts.day, dts.hour, dts.min,
dts.sec, dts.us, tz)
dt = dt + tz.utcoffset(dt)
- if dt.hour > 0:
+ if (dt.hour + dt.minute + dt.second + dt.microsecond) > 0:
return False
else:
trans, deltas, typ = _get_dst_info(tz)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 8c4d2f838ee8d..2961b2fb2241f 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -2667,3 +2667,50 @@ def patch(ob, attr, value):
delattr(ob, attr)
else:
setattr(ob, attr, old)
+
+
+@contextmanager
+def set_timezone(tz):
+ """Context manager for temporarily setting a timezone.
+
+ Parameters
+ ----------
+ tz : str
+ A string representing a valid timezone.
+
+ Examples
+ --------
+
+ >>> from datetime import datetime
+ >>> from dateutil.tz import tzlocal
+ >>> tzlocal().tzname(datetime.now())
+ 'IST'
+
+ >>> with set_timezone('US/Eastern'):
+ ... tzlocal().tzname(datetime.now())
+ ...
+ 'EDT'
+ """
+ if is_platform_windows():
+ import nose
+ raise nose.SkipTest("timezone setting not supported on windows")
+
+ import os
+ import time
+
+ def setTZ(tz):
+ if tz is None:
+ try:
+ del os.environ['TZ']
+ except:
+ pass
+ else:
+ os.environ['TZ'] = tz
+ time.tzset()
+
+ orig_tz = os.environ.get('TZ')
+ setTZ(tz)
+ try:
+ yield
+ finally:
+ setTZ(orig_tz)
| - [x] closes #13459
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
is_normalized returned False for normalized date_range
in case of local timezone 'Asia/Kolkata'
Fixes: #13459
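The fixed behaviour can be sketched as below (a minimal illustration, not the test itself; `tzlocal()` picks up whatever TZ the machine is set to, so only the relative result matters):

```python
import pandas as pd
from dateutil.tz import tzlocal

# times at 09:30 local are not normalized ...
rng = pd.date_range("2000-01-01 09:30", periods=3, freq="D", tz=tzlocal())

# ... but normalize() snaps everything to local midnight, and
# is_normalized should now report True regardless of the local UTC offset
norm = rng.normalize()
print(rng.is_normalized, norm.is_normalized)
```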
| https://api.github.com/repos/pandas-dev/pandas/pulls/13484 | 2016-06-19T03:02:19Z | 2016-06-21T12:43:02Z | null | 2016-06-21T12:43:08Z |
BUG: windows with TemporaryFile and read_csv #13398
index 8a14765aa6df2..9a4d39b4e3390 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -493,6 +493,7 @@ Bug Fixes
- Bug in ``pd.read_csv()`` in which the ``nrows`` argument was not properly validated for both engines (:issue:`10476`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`)
+- Bug in ``pd.read_csv()`` with ``engine='python'`` when reading from a tempfile.TemporaryFile on Windows with Python 3, separator expressed as a regex (:issue:`13398`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` kwarg from accepting single-byte unicode strings (:issue:`13219`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` from being an empty set (:issue:`13402`)
- Bug in ``pd.read_csv()`` with ``engine=='c'`` in which null ``quotechar`` was not accepted even though ``quoting`` was specified as ``None`` (:issue:`13411`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 9baff67845dac..dc9455289b757 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1868,7 +1868,7 @@ class MyDialect(csv.Dialect):
else:
def _read():
- line = next(f)
+ line = f.readline()
pat = re.compile(sep)
yield pat.split(line.strip())
for line in f:
diff --git a/pandas/io/tests/parser/python_parser_only.py b/pandas/io/tests/parser/python_parser_only.py
index a08cb36c13f80..6f0ea75c4da93 100644
--- a/pandas/io/tests/parser/python_parser_only.py
+++ b/pandas/io/tests/parser/python_parser_only.py
@@ -171,3 +171,17 @@ def test_read_table_buglet_4x_multiindex(self):
columns=list('abcABC'), index=list('abc'))
actual = self.read_table(StringIO(data), sep='\s+')
tm.assert_frame_equal(actual, expected)
+
+ def test_temporary_file(self):
+ # GH13398
+ data1 = "0 0"
+
+ from tempfile import TemporaryFile
+ new_file = TemporaryFile("w+")
+ new_file.write(data1)
+ new_file.flush()
+ new_file.seek(0)
+
+ result = self.read_csv(new_file, sep=r"\s*", header=None)
+ expected = DataFrame([[0, 0]])
+ tm.assert_frame_equal(result, expected)
| - [x] closes #13398
- [x] no tests added -> could not find location for IO tests?
- [x] passes `git diff upstream/master | flake8 --diff`
Changed the way of reading back to `readline` (consistent with the check before entering the function).
One failure on Windows 10 (Python 3.5), but it is actually expected to fail (should probably tag it as well?):
```
======================================================================
FAIL: test_next (pandas.io.tests.test_common.TestMMapWrapper)
----------------------------------------------------------------------
Traceback (most recent call last):
File "G:\Informatique\pandas\pandas\io\tests\test_common.py", line 139, in test_next
self.assertEqual(next_line, line)
AssertionError: 'a,b,c\r\n' != 'a,b,c\n'
- a,b,c
? -
+ a,b,c
```
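For reference, a standalone sketch of the fixed path (the original report used `sep=r"\s*"`; `r"\s+"` with the python engine exercises the same regex line-splitting while avoiding the zero-width-match quirks of `\s*` on newer Pythons):

```python
import tempfile

import pandas as pd

with tempfile.TemporaryFile("w+") as f:
    f.write("0 1\n2 3\n")
    f.flush()
    f.seek(0)
    # a regex separator routes through the python engine's readline path
    df = pd.read_csv(f, sep=r"\s+", header=None, engine="python")

print(df)
```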
| https://api.github.com/repos/pandas-dev/pandas/pulls/13481 | 2016-06-18T16:09:28Z | 2016-06-22T10:21:28Z | null | 2016-06-22T10:21:40Z |
BUG: Series/Index results in datetime/timedelta incorrectly if inputs are all nan/nat like | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 70c466ed51681..39cb4a26d413c 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -484,8 +484,9 @@ Bug Fixes
- Bug in ``Series.str.extractall()`` with ``str`` index raises ``ValueError`` (:issue:`13156`)
- Bug in ``Series.str.extractall()`` with single group and quantifier (:issue:`13382`)
-
- Bug in ``DatetimeIndex`` and ``Period`` subtraction raises ``ValueError`` or ``AttributeError`` rather than ``TypeError`` (:issue:`13078`)
+- Bug in ``Index`` and ``Series`` created with ``NaN`` and ``NaT`` mixed data may not have ``datetime64`` dtype (:issue:`13324`)
+- Bug in ``Index`` and ``Series`` may ignore ``np.datetime64('nat')`` and ``np.timedelta64('nat')`` when inferring dtype (:issue:`13324`)
- Bug in ``PeriodIndex`` and ``Period`` subtraction raises ``AttributeError`` (:issue:`13071`)
- Bug in ``PeriodIndex`` construction returning a ``float64`` index in some circumstances (:issue:`13067`)
- Bug in ``.resample(..)`` with a ``PeriodIndex`` not changing its ``freq`` appropriately when empty (:issue:`13067`)
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index ad27010714f63..2f0691f1d1bb0 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -242,8 +242,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
# don't support boolean explicity ATM
pass
elif inferred != 'string':
- if (inferred.startswith('datetime') or
- tslib.is_timestamp_array(subarr)):
+ if inferred.startswith('datetime'):
if (lib.is_datetime_with_singletz_array(subarr) or
'tz' in kwargs):
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 234ac7ea2c60c..9f96037c97c62 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -103,6 +103,7 @@ def infer_dtype(object _values):
Py_ssize_t i, n
object val
ndarray values
+ bint seen_pdnat = False, seen_val = False
if isinstance(_values, np.ndarray):
values = _values
@@ -141,17 +142,34 @@ def infer_dtype(object _values):
values = values.ravel()
# try to use a valid value
- for i in range(n):
- val = util.get_value_1d(values, i)
- if not is_null_datetimelike(val):
- break
+ for i from 0 <= i < n:
+ val = util.get_value_1d(values, i)
- if util.is_datetime64_object(val) or val is NaT:
+ # do not use is_null_datetimelike to keep
+ # np.datetime64('nat') and np.timedelta64('nat')
+ if util._checknull(val):
+ pass
+ elif val is NaT:
+ seen_pdnat = True
+ else:
+ seen_val = True
+ break
+
+ # if all values are nan/NaT
+ if seen_val is False and seen_pdnat is True:
+ return 'datetime'
+ # float/object nan is handled in latter logic
+
+ if util.is_datetime64_object(val):
if is_datetime64_array(values):
return 'datetime64'
elif is_timedelta_or_timedelta64_array(values):
return 'timedelta'
+ elif is_timedelta(val):
+ if is_timedelta_or_timedelta64_array(values):
+ return 'timedelta'
+
elif util.is_integer_object(val):
# a timedelta will show true here as well
if is_timedelta(val):
@@ -200,17 +218,15 @@ def infer_dtype(object _values):
if is_bytes_array(values):
return 'bytes'
- elif is_timedelta(val):
- if is_timedelta_or_timedelta64_array(values):
- return 'timedelta'
-
elif is_period(val):
if is_period_array(values):
return 'period'
for i in range(n):
val = util.get_value_1d(values, i)
- if util.is_integer_object(val):
+ if (util.is_integer_object(val) and
+ not util.is_timedelta64_object(val) and
+ not util.is_datetime64_object(val)):
return 'mixed-integer'
return 'mixed'
@@ -237,20 +253,46 @@ def is_possible_datetimelike_array(object arr):
return False
return seen_datetime or seen_timedelta
+
cdef inline bint is_null_datetimelike(v):
# determine if we have a null for a timedelta/datetime (or integer versions)x
if util._checknull(v):
return True
+ elif v is NaT:
+ return True
elif util.is_timedelta64_object(v):
return v.view('int64') == iNaT
elif util.is_datetime64_object(v):
return v.view('int64') == iNaT
elif util.is_integer_object(v):
return v == iNaT
+ return False
+
+
+cdef inline bint is_null_datetime64(v):
+ # determine if we have a null for a datetime (or integer versions),
+ # excluding np.timedelta64('nat')
+ if util._checknull(v):
+ return True
+ elif v is NaT:
+ return True
+ elif util.is_datetime64_object(v):
+ return v.view('int64') == iNaT
+ return False
+
+
+cdef inline bint is_null_timedelta64(v):
+ # determine if we have a null for a timedelta (or integer versions),
+ # excluding np.datetime64('nat')
+ if util._checknull(v):
+ return True
elif v is NaT:
return True
+ elif util.is_timedelta64_object(v):
+ return v.view('int64') == iNaT
return False
+
cdef inline bint is_datetime(object o):
return PyDateTime_Check(o)
@@ -420,7 +462,7 @@ def is_datetime_array(ndarray[object] values):
# return False for all nulls
for i in range(n):
v = values[i]
- if is_null_datetimelike(v):
+ if is_null_datetime64(v):
# we are a regular null
if util._checknull(v):
null_count += 1
@@ -437,7 +479,7 @@ def is_datetime64_array(ndarray values):
# return False for all nulls
for i in range(n):
v = values[i]
- if is_null_datetimelike(v):
+ if is_null_datetime64(v):
# we are a regular null
if util._checknull(v):
null_count += 1
@@ -481,7 +523,7 @@ def is_timedelta_array(ndarray values):
return False
for i in range(n):
v = values[i]
- if is_null_datetimelike(v):
+ if is_null_timedelta64(v):
# we are a regular null
if util._checknull(v):
null_count += 1
@@ -496,7 +538,7 @@ def is_timedelta64_array(ndarray values):
return False
for i in range(n):
v = values[i]
- if is_null_datetimelike(v):
+ if is_null_timedelta64(v):
# we are a regular null
if util._checknull(v):
null_count += 1
@@ -512,7 +554,7 @@ def is_timedelta_or_timedelta64_array(ndarray values):
return False
for i in range(n):
v = values[i]
- if is_null_datetimelike(v):
+ if is_null_timedelta64(v):
# we are a regular null
if util._checknull(v):
null_count += 1
diff --git a/pandas/src/util.pxd b/pandas/src/util.pxd
index 96a23a91cc7c2..fcb5583a0a6e7 100644
--- a/pandas/src/util.pxd
+++ b/pandas/src/util.pxd
@@ -98,4 +98,4 @@ cdef inline bint _checknan(object val):
return not cnp.PyArray_Check(val) and val != val
cdef inline bint is_period_object(object val):
- return getattr(val,'_typ','_typ') == 'period'
+ return getattr(val, '_typ', '_typ') == 'period'
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index d535eaa238567..67869901b068e 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -203,6 +203,49 @@ def __array__(self, dtype=None):
result = pd.Index(ArrayLike(array))
self.assert_index_equal(result, expected)
+ def test_index_ctor_infer_nan_nat(self):
+ # GH 13467
+ exp = pd.Float64Index([np.nan, np.nan])
+ self.assertEqual(exp.dtype, np.float64)
+ tm.assert_index_equal(Index([np.nan, np.nan]), exp)
+ tm.assert_index_equal(Index(np.array([np.nan, np.nan])), exp)
+
+ exp = pd.DatetimeIndex([pd.NaT, pd.NaT])
+ self.assertEqual(exp.dtype, 'datetime64[ns]')
+ tm.assert_index_equal(Index([pd.NaT, pd.NaT]), exp)
+ tm.assert_index_equal(Index(np.array([pd.NaT, pd.NaT])), exp)
+
+ exp = pd.DatetimeIndex([pd.NaT, pd.NaT])
+ self.assertEqual(exp.dtype, 'datetime64[ns]')
+
+ for data in [[pd.NaT, np.nan], [np.nan, pd.NaT],
+ [np.nan, np.datetime64('nat')],
+ [np.datetime64('nat'), np.nan]]:
+ tm.assert_index_equal(Index(data), exp)
+ tm.assert_index_equal(Index(np.array(data, dtype=object)), exp)
+
+ exp = pd.TimedeltaIndex([pd.NaT, pd.NaT])
+ self.assertEqual(exp.dtype, 'timedelta64[ns]')
+
+ for data in [[np.nan, np.timedelta64('nat')],
+ [np.timedelta64('nat'), np.nan],
+ [pd.NaT, np.timedelta64('nat')],
+ [np.timedelta64('nat'), pd.NaT]]:
+
+ tm.assert_index_equal(Index(data), exp)
+ tm.assert_index_equal(Index(np.array(data, dtype=object)), exp)
+
+ # mixed np.datetime64/timedelta64 nat results in object
+ data = [np.datetime64('nat'), np.timedelta64('nat')]
+ exp = pd.Index(data, dtype=object)
+ tm.assert_index_equal(Index(data), exp)
+ tm.assert_index_equal(Index(np.array(data, dtype=object)), exp)
+
+ data = [np.timedelta64('nat'), np.datetime64('nat')]
+ exp = pd.Index(data, dtype=object)
+ tm.assert_index_equal(Index(data), exp)
+ tm.assert_index_equal(Index(np.array(data, dtype=object)), exp)
+
def test_index_ctor_infer_periodindex(self):
xp = period_range('2012-1-1', freq='M', periods=3)
rs = Index(xp)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index c632704b7c5eb..2a7e8a957977f 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -252,6 +252,24 @@ def test_constructor_pass_none(self):
expected = Series(index=Index([None]))
assert_series_equal(s, expected)
+ def test_constructor_pass_nan_nat(self):
+ # GH 13467
+ exp = Series([np.nan, np.nan], dtype=np.float64)
+ self.assertEqual(exp.dtype, np.float64)
+ tm.assert_series_equal(Series([np.nan, np.nan]), exp)
+ tm.assert_series_equal(Series(np.array([np.nan, np.nan])), exp)
+
+ exp = Series([pd.NaT, pd.NaT])
+ self.assertEqual(exp.dtype, 'datetime64[ns]')
+ tm.assert_series_equal(Series([pd.NaT, pd.NaT]), exp)
+ tm.assert_series_equal(Series(np.array([pd.NaT, pd.NaT])), exp)
+
+ tm.assert_series_equal(Series([pd.NaT, np.nan]), exp)
+ tm.assert_series_equal(Series(np.array([pd.NaT, np.nan])), exp)
+
+ tm.assert_series_equal(Series([np.nan, pd.NaT]), exp)
+ tm.assert_series_equal(Series(np.array([np.nan, pd.NaT])), exp)
+
def test_constructor_cast(self):
self.assertRaises(ValueError, Series, ['a', 'b', 'c'], dtype=float)
@@ -688,8 +706,9 @@ def test_constructor_dtype_timedelta64(self):
td = Series([np.timedelta64(300000000), pd.NaT])
self.assertEqual(td.dtype, 'timedelta64[ns]')
+ # because iNaT is int, not coerced to timedelta
td = Series([np.timedelta64(300000000), tslib.iNaT])
- self.assertEqual(td.dtype, 'timedelta64[ns]')
+ self.assertEqual(td.dtype, 'object')
td = Series([np.timedelta64(300000000), np.nan])
self.assertEqual(td.dtype, 'timedelta64[ns]')
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
index a6941369b35be..5f016322f101f 100644
--- a/pandas/tests/test_infer_and_convert.py
+++ b/pandas/tests/test_infer_and_convert.py
@@ -180,6 +180,207 @@ def test_datetime(self):
index = Index(dates)
self.assertEqual(index.inferred_type, 'datetime64')
+ def test_infer_dtype_datetime(self):
+
+ arr = np.array([pd.Timestamp('2011-01-01'),
+ pd.Timestamp('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([np.datetime64('2011-01-01'),
+ np.datetime64('2011-01-01')], dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ arr = np.array([datetime(2011, 1, 1), datetime(2012, 2, 1)])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ # starts with nan
+ for n in [pd.NaT, np.nan]:
+ arr = np.array([n, pd.Timestamp('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([n, np.datetime64('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ arr = np.array([n, datetime(2011, 1, 1)])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([n, pd.Timestamp('2011-01-02'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([n, np.datetime64('2011-01-02'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ arr = np.array([n, datetime(2011, 1, 1), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ # different type of nat
+ arr = np.array([np.timedelta64('nat'),
+ np.datetime64('2011-01-02')], dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([np.datetime64('2011-01-02'),
+ np.timedelta64('nat')], dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ # mixed datetime
+ arr = np.array([datetime(2011, 1, 1),
+ pd.Timestamp('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ # should be datetime?
+ arr = np.array([np.datetime64('2011-01-01'),
+ pd.Timestamp('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([pd.Timestamp('2011-01-02'),
+ np.datetime64('2011-01-01')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed-integer')
+
+ arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1.1])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([np.nan, '2011-01-01', pd.Timestamp('2011-01-02')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ def test_infer_dtype_timedelta(self):
+
+ arr = np.array([pd.Timedelta('1 days'),
+ pd.Timedelta('2 days')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([np.timedelta64(1, 'D'),
+ np.timedelta64(2, 'D')], dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([timedelta(1), timedelta(2)])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ # starts with nan
+ for n in [pd.NaT, np.nan]:
+ arr = np.array([n, pd.Timedelta('1 days')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([n, np.timedelta64(1, 'D')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([n, timedelta(1)])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([n, pd.Timedelta('1 days'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([n, np.timedelta64(1, 'D'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([n, timedelta(1), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ # different type of nat
+ arr = np.array([np.datetime64('nat'), np.timedelta64(1, 'D')],
+ dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([np.timedelta64(1, 'D'), np.datetime64('nat')],
+ dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ def test_infer_dtype_all_nan_nat_like(self):
+ arr = np.array([np.nan, np.nan])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'floating')
+
+ # nan and None mix are result in mixed
+ arr = np.array([np.nan, np.nan, None])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([None, np.nan, np.nan])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ # pd.NaT
+ arr = np.array([pd.NaT])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([pd.NaT, np.nan])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([np.nan, pd.NaT])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([np.nan, pd.NaT, np.nan])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ arr = np.array([None, pd.NaT, None])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime')
+
+ # np.datetime64(nat)
+ arr = np.array([np.datetime64('nat')])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ for n in [np.nan, pd.NaT, None]:
+ arr = np.array([n, np.datetime64('nat'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ arr = np.array([pd.NaT, n, np.datetime64('nat'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'datetime64')
+
+ arr = np.array([np.timedelta64('nat')], dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ for n in [np.nan, pd.NaT, None]:
+ arr = np.array([n, np.timedelta64('nat'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ arr = np.array([pd.NaT, n, np.timedelta64('nat'), n])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'timedelta')
+
+ # datetime / timedelta mixed
+ arr = np.array([pd.NaT, np.datetime64('nat'),
+ np.timedelta64('nat'), np.nan])
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ arr = np.array([np.timedelta64('nat'), np.datetime64('nat')],
+ dtype=object)
+ self.assertEqual(pd.lib.infer_dtype(arr), 'mixed')
+
+ def test_is_datetimelike_array_all_nan_nat_like(self):
+ arr = np.array([np.nan, pd.NaT, np.datetime64('nat')])
+ self.assertTrue(pd.lib.is_datetime_array(arr))
+ self.assertTrue(pd.lib.is_datetime64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_array(arr))
+ self.assertFalse(pd.lib.is_timedelta64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_or_timedelta64_array(arr))
+
+ arr = np.array([np.nan, pd.NaT, np.timedelta64('nat')])
+ self.assertFalse(pd.lib.is_datetime_array(arr))
+ self.assertFalse(pd.lib.is_datetime64_array(arr))
+ self.assertTrue(pd.lib.is_timedelta_array(arr))
+ self.assertTrue(pd.lib.is_timedelta64_array(arr))
+ self.assertTrue(pd.lib.is_timedelta_or_timedelta64_array(arr))
+
+ arr = np.array([np.nan, pd.NaT, np.datetime64('nat'),
+ np.timedelta64('nat')])
+ self.assertFalse(pd.lib.is_datetime_array(arr))
+ self.assertFalse(pd.lib.is_datetime64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_array(arr))
+ self.assertFalse(pd.lib.is_timedelta64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_or_timedelta64_array(arr))
+
+ arr = np.array([np.nan, pd.NaT])
+ self.assertTrue(pd.lib.is_datetime_array(arr))
+ self.assertTrue(pd.lib.is_datetime64_array(arr))
+ self.assertTrue(pd.lib.is_timedelta_array(arr))
+ self.assertTrue(pd.lib.is_timedelta64_array(arr))
+ self.assertTrue(pd.lib.is_timedelta_or_timedelta64_array(arr))
+
+ arr = np.array([np.nan, np.nan], dtype=object)
+ self.assertFalse(pd.lib.is_datetime_array(arr))
+ self.assertFalse(pd.lib.is_datetime64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_array(arr))
+ self.assertFalse(pd.lib.is_timedelta64_array(arr))
+ self.assertFalse(pd.lib.is_timedelta_or_timedelta64_array(arr))
+
def test_date(self):
dates = [date(2012, 1, x) for x in range(1, 20)]
@@ -244,6 +445,13 @@ def test_categorical(self):
result = lib.infer_dtype(Series(arr))
self.assertEqual(result, 'categorical')
+ def test_is_period(self):
+ self.assertTrue(lib.is_period(pd.Period('2011-01', freq='M')))
+ self.assertFalse(lib.is_period(pd.PeriodIndex(['2011-01'], freq='M')))
+ self.assertFalse(lib.is_period(pd.Timestamp('2011-01')))
+ self.assertFalse(lib.is_period(1))
+ self.assertFalse(lib.is_period(np.nan))
+
class TestConvert(tm.TestCase):
@@ -437,6 +645,7 @@ def test_convert_downcast_int64(self):
result = lib.downcast_int64(arr, na_values)
self.assert_numpy_array_equal(result, expected)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 62f8b10e3eea2..fe4de11864522 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -843,15 +843,6 @@ cdef _tz_format(object obj, object zone):
except:
return ', tz=%s' % zone
-def is_timestamp_array(ndarray[object] values):
- cdef int i, n = len(values)
- if n == 0:
- return False
- for i in range(n):
- if not is_timestamp(values[i]):
- return False
- return True
-
cpdef object get_value_box(ndarray arr, object loc):
cdef:
@@ -957,6 +948,7 @@ cdef str _NDIM_STRING = "ndim"
# (see Timestamp class above). This will serve as a C extension type that
# shadows the python class, where we do any heavy lifting.
cdef class _Timestamp(datetime):
+
cdef readonly:
int64_t value, nanosecond
object freq # frequency reference
| - [x] closes #13467
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
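The intended inference can be illustrated with the public API (in current pandas `infer_dtype` lives in `pandas.api.types`; at the time of this PR it was `pd.lib.infer_dtype`):

```python
import numpy as np
import pandas as pd
from pandas.api.types import infer_dtype

# all-NaT / NaN-and-NaT inputs infer as datetime instead of falling through
print(infer_dtype([pd.NaT, pd.NaT]))

# and the Series constructor picks up datetime64[ns] accordingly
print(pd.Series([pd.NaT, np.nan]).dtype)
```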
| https://api.github.com/repos/pandas-dev/pandas/pulls/13477 | 2016-06-18T04:12:59Z | 2016-07-11T07:31:10Z | 2016-07-11T07:31:10Z | 2016-07-11T07:52:53Z |
BUG: fix to_datetime to handle int16 and int8 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index db2bccf6ac349..8d2e9bf4c1ae6 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -517,3 +517,5 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+
+- Bug in ``pd.to_datetime()`` raises ``ValueError`` in case of dtype ``int8`` and ``int16`` (:issue:`13451`)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index fcc544ec7f239..b0caa1f6a77cb 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2563,6 +2563,33 @@ def test_dataframe(self):
with self.assertRaises(ValueError):
to_datetime(df2)
+ def test_dataframe_dtypes(self):
+ # #13451
+ df = DataFrame({'year': [2015, 2016],
+ 'month': [2, 3],
+ 'day': [4, 5]})
+
+ # int16
+ result = to_datetime(df.astype('int16'))
+ expected = Series([Timestamp('20150204 00:00:00'),
+ Timestamp('20160305 00:00:00')])
+ assert_series_equal(result, expected)
+
+ # mixed dtypes
+ df['month'] = df['month'].astype('int8')
+ df['day'] = df['day'].astype('int8')
+ result = to_datetime(df)
+ expected = Series([Timestamp('20150204 00:00:00'),
+ Timestamp('20160305 00:00:00')])
+ assert_series_equal(result, expected)
+
+ # float
+ df = DataFrame({'year': [2000, 2001],
+ 'month': [1.5, 1],
+ 'day': [1, 1]})
+ with self.assertRaises(ValueError):
+ to_datetime(df)
+
class TestDatetimeIndex(tm.TestCase):
_multiprocess_can_split_ = True
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index d5e87d1df2462..01b1c8a852215 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -508,7 +508,11 @@ def f(value):
def coerce(values):
# we allow coercion to if errors allows
- return to_numeric(values, errors=errors)
+ values = to_numeric(values, errors=errors)
+ # prevent overflow in case of int8 or int16
+ if com.is_integer_dtype(values):
+ values = values.astype('int64', copy=False)
+ return values
values = (coerce(arg[unit_rev['year']]) * 10000 +
coerce(arg[unit_rev['month']]) * 100 +
| - [x] closes #13451
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Fixes #13451
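The fix upcasts small integer columns to int64 before computing `year * 10000 + month * 100 + day`, which would otherwise overflow. A minimal check (same frame as the new test):

```python
import pandas as pd

df = pd.DataFrame({"year": [2015, 2016],
                   "month": [2, 3],
                   "day": [4, 5]}).astype("int16")

# 2015 * 10000 does not fit in int16, so without the upcast this overflowed
result = pd.to_datetime(df)
print(result)
```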
| https://api.github.com/repos/pandas-dev/pandas/pulls/13464 | 2016-06-16T17:23:06Z | 2016-06-17T22:04:34Z | null | 2016-06-18T11:23:20Z |
BUG: Fix a bug when using DataFrame.to_records with unicode column names | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 84dfe73adefe3..ba272fa364145 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -353,3 +353,4 @@ Bug Fixes
- Bug in ``pd.read_csv()`` for the C engine where ``usecols`` were being indexed incorrectly with ``parse_dates`` (:issue:`14792`)
+- Bug in ``pd.DataFrame.to_records`` which failed with unicode characters in column names (:issue:`11879`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d96fb094f5d5c..efbb5d6f892f5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1105,13 +1105,17 @@ def to_records(self, index=True, convert_datetime64=True):
count += 1
elif index_names[0] is None:
index_names = ['index']
- names = lmap(str, index_names) + lmap(str, self.columns)
+ names = (lmap(compat.text_type, index_names) +
+ lmap(compat.text_type, self.columns))
else:
arrays = [self[c].get_values() for c in self.columns]
- names = lmap(str, self.columns)
+ names = lmap(compat.text_type, self.columns)
- dtype = np.dtype([(x, v.dtype) for x, v in zip(names, arrays)])
- return np.rec.fromarrays(arrays, dtype=dtype, names=names)
+ formats = [v.dtype for v in arrays]
+ return np.rec.fromarrays(
+ arrays,
+ dtype={'names': names, 'formats': formats}
+ )
@classmethod
def from_items(cls, items, columns=None, orient='columns'):
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 53083a602e183..fb82b0598bb0a 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -179,3 +179,16 @@ def test_to_records_with_unicode_index(self):
.to_records()
expected = np.rec.array([('x', 'y')], dtype=[('a', 'O'), ('b', 'O')])
tm.assert_almost_equal(result, expected)
+
+ def test_to_records_with_unicode_column_names(self):
+ # Issue #11879. to_records used to raise an exception when used
+ # with column names containing non-ascii characters in Python 2
+ result = DataFrame(data={u"accented_name_é": [1.0]}).to_records()
+ # Note that numpy allows for unicode field names but dtypes need
+ # to be specified using a dictionary instead of a list of tuples.
+ expected = np.rec.array(
+ [(0, 1.0)],
+ dtype={"names": ["index", u"accented_name_é"],
+ "formats": ['<i8', '<f8']}
+ )
+ tm.assert_almost_equal(result, expected)
| Fix a bug when using DataFrame.to_records with unicode column names in python 2
- [x] closes #11879
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
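The dict-form structured dtype the patch switches to can be sketched standalone with numpy (the field names below stand in for the accented names from the issue):

```python
import numpy as np

# Sketch of the workaround used in the patch: the dict form of a
# structured dtype ({'names': ..., 'formats': ...}) accepts unicode
# field names, whereas the list-of-tuples form failed on Python 2.
arrays = [np.array([0], dtype='<i8'), np.array([1.0])]
names = ['index', u'value']

rec = np.rec.fromarrays(arrays,
                        dtype={'names': names,
                               'formats': [a.dtype for a in arrays]})
print(rec.dtype.names)
```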
| https://api.github.com/repos/pandas-dev/pandas/pulls/13462 | 2016-06-16T13:19:19Z | 2017-02-27T19:44:37Z | null | 2017-02-27T19:45:10Z |
BUG: Fix for .str.replace with invalid input | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index b3ce9911d3f4d..c68ec8b4eafb8 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -424,3 +424,4 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``.str.replace`` does not raise TypeError for invalid replacement (:issue:`13438`)
\ No newline at end of file
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index ca8e701d0ce17..8496aaa286ca7 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -4,7 +4,7 @@
from pandas.core.common import (isnull, notnull, _values_from_object,
is_bool_dtype,
is_list_like, is_categorical_dtype,
- is_object_dtype)
+ is_object_dtype, is_string_like)
from pandas.core.algorithms import take_1d
import pandas.compat as compat
from pandas.core.base import AccessorProperty, NoNewAttributesMixin
@@ -309,6 +309,8 @@ def str_replace(arr, pat, repl, n=-1, case=True, flags=0):
-------
replaced : Series/Index of objects
"""
+ if not is_string_like(repl): # Check whether repl is valid (GH 13438)
+ raise TypeError("repl must be a string")
use_re = not case or len(pat) > 1 or flags
if use_re:
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 73f9809a7f042..012257bf13015 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -430,6 +430,13 @@ def test_replace(self):
result = values.str.replace("(?<=\w),(?=\w)", ", ", flags=re.UNICODE)
tm.assert_series_equal(result, exp)
+ # GH 13438
+ for pdClass in (Series, Index):
+ for repl in (None, 3, {'a': 'b'}):
+ for data in (['a', 'b', None], ['a', 'b', 'c', 'ad']):
+ values = pdClass(data)
+ self.assertRaises(TypeError, values.str.replace, 'a', repl)
+
def test_repeat(self):
values = Series(['a', 'b', NA, 'c', NA, 'd'])
| - [x] closes #13438
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
.str.replace now raises TypeError when replacement is invalid
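A quick demonstration of the new behavior (assuming a pandas build that includes this check):

```python
import pandas as pd

# With the check in place, an invalid replacement fails fast with
# TypeError instead of erroring later inside re.sub / str.replace.
s = pd.Series(['a', 'b', None])
try:
    s.str.replace('a', 3)  # repl is not a string
    raised = False
except TypeError:
    raised = True
print(raised)
```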
| https://api.github.com/repos/pandas-dev/pandas/pulls/13460 | 2016-06-16T09:26:20Z | 2016-06-16T21:02:21Z | null | 2016-06-16T21:02:28Z |
In gbq, use googleapiclient instead of apiclient | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index b9afa7fcb7959..64644bd9a7a26 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -528,3 +528,5 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
- Bug in ``groupby`` with ``as_index=False`` returns all NaN's when grouping on multiple columns including a categorical one (:issue:`13204`)
+
+- Bug where ``pd.read_gbq()`` could throw ``ImportError: No module named discovery`` as a result of a naming conflict with another python package called apiclient (:issue:`13454`)
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index e706434f29dc5..140f5cc6bb6e3 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -46,8 +46,12 @@ def _test_google_api_imports():
try:
import httplib2 # noqa
- from apiclient.discovery import build # noqa
- from apiclient.errors import HttpError # noqa
+ try:
+ from googleapiclient.discovery import build # noqa
+ from googleapiclient.errors import HttpError # noqa
+ except:
+ from apiclient.discovery import build # noqa
+ from apiclient.errors import HttpError # noqa
from oauth2client.client import AccessTokenRefreshError # noqa
from oauth2client.client import OAuth2WebServerFlow # noqa
from oauth2client.file import Storage # noqa
@@ -266,7 +270,10 @@ def sizeof_fmt(num, suffix='b'):
def get_service(self):
import httplib2
- from apiclient.discovery import build
+ try:
+ from googleapiclient.discovery import build
+ except:
+ from apiclient.discovery import build
http = httplib2.Http()
http = self.credentials.authorize(http)
@@ -315,7 +322,10 @@ def process_insert_errors(self, insert_errors):
raise StreamingInsertError
def run_query(self, query):
- from apiclient.errors import HttpError
+ try:
+ from googleapiclient.errors import HttpError
+ except:
+ from apiclient.errors import HttpError
from oauth2client.client import AccessTokenRefreshError
_check_google_client_version()
@@ -420,7 +430,10 @@ def run_query(self, query):
return schema, result_pages
def load_data(self, dataframe, dataset_id, table_id, chunksize):
- from apiclient.errors import HttpError
+ try:
+ from googleapiclient.errors import HttpError
+ except:
+ from apiclient.errors import HttpError
job_id = uuid.uuid4().hex
rows = []
@@ -474,7 +487,10 @@ def load_data(self, dataframe, dataset_id, table_id, chunksize):
self._print("\n")
def verify_schema(self, dataset_id, table_id, schema):
- from apiclient.errors import HttpError
+ try:
+ from googleapiclient.errors import HttpError
+ except:
+ from apiclient.errors import HttpError
try:
return (self.service.tables().get(
@@ -765,7 +781,10 @@ class _Table(GbqConnector):
def __init__(self, project_id, dataset_id, reauth=False, verbose=False,
private_key=None):
- from apiclient.errors import HttpError
+ try:
+ from googleapiclient.errors import HttpError
+ except:
+ from apiclient.errors import HttpError
self.http_error = HttpError
self.dataset_id = dataset_id
super(_Table, self).__init__(project_id, reauth, verbose, private_key)
@@ -865,7 +884,10 @@ class _Dataset(GbqConnector):
def __init__(self, project_id, reauth=False, verbose=False,
private_key=None):
- from apiclient.errors import HttpError
+ try:
+ from googleapiclient.errors import HttpError
+ except:
+ from apiclient.errors import HttpError
self.http_error = HttpError
super(_Dataset, self).__init__(project_id, reauth, verbose,
private_key)
diff --git a/pandas/io/tests/test_gbq.py b/pandas/io/tests/test_gbq.py
index 5cb681f4d2e7d..278c5d7215624 100644
--- a/pandas/io/tests/test_gbq.py
+++ b/pandas/io/tests/test_gbq.py
@@ -73,8 +73,12 @@ def _test_imports():
if _SETUPTOOLS_INSTALLED:
try:
- from apiclient.discovery import build # noqa
- from apiclient.errors import HttpError # noqa
+ try:
+ from googleapiclient.discovery import build # noqa
+ from googleapiclient.errors import HttpError # noqa
+ except:
+ from apiclient.discovery import build # noqa
+ from apiclient.errors import HttpError # noqa
from oauth2client.client import OAuth2WebServerFlow # noqa
from oauth2client.client import AccessTokenRefreshError # noqa
@@ -280,6 +284,17 @@ class GBQUnitTests(tm.TestCase):
def setUp(self):
test_requirements()
+ def test_import_google_api_python_client(self):
+ if compat.PY2:
+ with tm.assertRaises(ImportError):
+ from googleapiclient.discovery import build # noqa
+ from googleapiclient.errors import HttpError # noqa
+ from apiclient.discovery import build # noqa
+ from apiclient.errors import HttpError # noqa
+ else:
+ from googleapiclient.discovery import build # noqa
+ from googleapiclient.errors import HttpError # noqa
+
def test_should_return_bigquery_integers_as_python_floats(self):
result = gbq._parse_entry(1, 'INTEGER')
tm.assert_equal(result, float(1))
| - [x] closes #13454
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
All gbq tests pass locally
```
tony@tonypc:~/parthea-pandas/pandas/io/tests$ nosetests test_gbq.py -v
test_read_gbq_with_corrupted_private_key_json_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_read_gbq_with_empty_private_key_file_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_read_gbq_with_empty_private_key_json_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_read_gbq_with_invalid_private_key_json_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_read_gbq_with_no_project_id_given_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_read_gbq_with_private_key_json_wrong_types_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_return_bigquery_booleans_as_python_booleans (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_return_bigquery_floats_as_python_floats (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_return_bigquery_integers_as_python_floats (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_return_bigquery_strings_as_python_strings (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_return_bigquery_timestamps_as_numpy_datetime (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_that_parse_data_works_properly (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_to_gbq_should_fail_if_invalid_table_name_passed (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_to_gbq_with_no_project_id_given_should_fail (pandas.io.tests.test_gbq.GBQUnitTests) ... ok
test_should_be_able_to_get_a_bigquery_service (pandas.io.tests.test_gbq.TestGBQConnectorIntegration) ... ok
test_should_be_able_to_get_results_from_query (pandas.io.tests.test_gbq.TestGBQConnectorIntegration) ... ok
test_should_be_able_to_get_schema_from_query (pandas.io.tests.test_gbq.TestGBQConnectorIntegration) ... ok
test_should_be_able_to_get_valid_credentials (pandas.io.tests.test_gbq.TestGBQConnectorIntegration) ... ok
test_should_be_able_to_make_a_connector (pandas.io.tests.test_gbq.TestGBQConnectorIntegration) ... ok
test_should_be_able_to_get_a_bigquery_service (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyContentsIntegration) ... ok
test_should_be_able_to_get_results_from_query (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyContentsIntegration) ... ok
test_should_be_able_to_get_schema_from_query (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyContentsIntegration) ... ok
test_should_be_able_to_get_valid_credentials (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyContentsIntegration) ... ok
test_should_be_able_to_make_a_connector (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyContentsIntegration) ... ok
test_should_be_able_to_get_a_bigquery_service (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyPathIntegration) ... ok
test_should_be_able_to_get_results_from_query (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyPathIntegration) ... ok
test_should_be_able_to_get_schema_from_query (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyPathIntegration) ... ok
test_should_be_able_to_get_valid_credentials (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyPathIntegration) ... ok
test_should_be_able_to_make_a_connector (pandas.io.tests.test_gbq.TestGBQConnectorServiceAccountKeyPathIntegration) ... ok
test_bad_project_id (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_bad_table_name (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_column_order (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_column_order_plus_index (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_download_dataset_larger_than_200k_rows (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_index_column (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_malformed_query (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_arbitrary_timestamp (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_empty_strings (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_false_boolean (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_null_boolean (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_null_floats (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_null_integers (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_null_strings (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_null_timestamp (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_timestamp_unix_epoch (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_true_boolean (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_valid_floats (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_valid_integers (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_properly_handle_valid_strings (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_read_as_service_account_with_key_contents (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_should_read_as_service_account_with_key_path (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_unicode_string_conversion_and_normalization (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_zero_rows (pandas.io.tests.test_gbq.TestReadGBQIntegration) ... ok
test_create_dataset (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_create_table (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_dataset_does_not_exist (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_dataset_exists (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_delete_dataset (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_delete_table (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_generate_schema (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_google_upload_errors_should_raise_exception (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_list_dataset (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_list_table (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_list_table_zero_results (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_table_does_not_exist (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_upload_data (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_upload_data_if_table_exists_append (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_upload_data_if_table_exists_fail (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_upload_data_if_table_exists_replace (pandas.io.tests.test_gbq.TestToGBQIntegration) ... ok
test_upload_data_as_service_account_with_key_contents (pandas.io.tests.test_gbq.TestToGBQIntegrationServiceAccountKeyContents) ... ok
test_upload_data_as_service_account_with_key_path (pandas.io.tests.test_gbq.TestToGBQIntegrationServiceAccountKeyPath) ... ok
pandas.io.tests.test_gbq.test_requirements ... ok
pandas.io.tests.test_gbq.test_generate_bq_schema_deprecated ... ok
----------------------------------------------------------------------
Ran 73 tests in 435.170s
OK
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/13458 | 2016-06-16T02:16:25Z | 2016-07-07T07:26:12Z | 2016-07-07T07:26:12Z | 2016-07-07T14:28:02Z |
BLD: use inline macro | diff --git a/pandas/src/klib/khash_python.h b/pandas/src/klib/khash_python.h
index 7684493d08855..a375a73b04c9e 100644
--- a/pandas/src/klib/khash_python.h
+++ b/pandas/src/klib/khash_python.h
@@ -13,7 +13,7 @@
// simple hash, viewing the double bytes as an int64 and using khash's default
// hash for 64 bit integers.
// GH 13436
-inline khint64_t asint64(double key) {
+khint64_t PANDAS_INLINE asint64(double key) {
return *(khint64_t *)(&key);
}
#define kh_float64_hash_func(key) (khint32_t)((asint64(key))>>33^(asint64(key))^(asint64(key))<<11)
| closes #13448
This built locally (on Windows) for py27 and py35.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13456 | 2016-06-15T22:39:24Z | 2016-06-16T12:13:47Z | null | 2016-06-17T23:24:59Z |
ERR: fix error message for to_datetime | diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 6453e65ecdc81..7de62fbe71615 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -2320,7 +2320,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise',
iresult[i] = NPY_NAT
continue
elif is_raise:
- raise ValueError("time data %r does match format specified" %
+ raise ValueError("time data %r doesn't match format specified" %
(val,))
else:
return values
| - [x] ~~closes #xxxx~~ There is no issue for this.
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Fixes the error message when the argument to `to_datetime` doesn't match the format specified.
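One way to trigger the corrected message (the exact wording may vary by version, but a `ValueError` is raised either way):

```python
import pandas as pd

# The input deliberately disagrees with the explicit format, so
# to_datetime raises with the "doesn't match format" message.
try:
    pd.to_datetime(['2016/06/15'], format='%Y-%m-%d')
    raised = False
except ValueError:
    raised = True
print(raised)
```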
| https://api.github.com/repos/pandas-dev/pandas/pulls/13450 | 2016-06-15T13:58:27Z | 2016-06-16T12:18:43Z | null | 2016-06-17T03:17:59Z |
DOC: Corrected Series.str.extract documentation error | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 2f9f8ec936e78..ca8e701d0ce17 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -543,7 +543,7 @@ def str_extract(arr, pat, flags=0, expand=None):
each group. Any capture group names in regular expression pat will
be used for column names; otherwise capture group numbers will be
used. The dtype of each result column is always object, even when
- no match is found. If expand=True and pat has only one capture group,
+ no match is found. If expand=False and pat has only one capture group,
then return a Series (if subject is a Series) or Index (if subject
is an Index).
| Really small documentation fix for a sentence that rather confused me when I first read it (as written, the documentation contradicts itself). I believe this is the correct resolution.
Let me know if I need to actually compile/test anything.
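For reference, the behavior the corrected sentence describes:

```python
import pandas as pd

# With expand=False and a single capture group, extract returns a
# Series; with more than one group it returns a DataFrame.
s = pd.Series(['a1', 'b2'])

single = s.str.extract(r'(\d)', expand=False)
multi = s.str.extract(r'([ab])(\d)', expand=False)
print(type(single).__name__, type(multi).__name__)
```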
| https://api.github.com/repos/pandas-dev/pandas/pulls/13449 | 2016-06-15T13:15:00Z | 2016-06-15T13:25:13Z | 2016-06-15T13:25:13Z | 2016-06-15T13:35:01Z |
BUG: Rolling negative window issue fix #13383 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 6bc152aad6b01..0f8ed0558b5f1 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -516,3 +516,4 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``Series.rolling()`` that allowed negative window, but failed on aggregation (:issue:`13383`)
diff --git a/pandas/core/window.py b/pandas/core/window.py
index fbc56335aabd9..1e34d18fe3e54 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -321,6 +321,8 @@ def validate(self):
if isinstance(window, (list, tuple, np.ndarray)):
pass
elif com.is_integer(window):
+ if window < 0:
+ raise ValueError("window must be non-negative")
try:
import scipy.signal as sig
except ImportError:
@@ -850,6 +852,8 @@ def validate(self):
super(Rolling, self).validate()
if not com.is_integer(self.window):
raise ValueError("window must be an integer")
+ elif self.window < 0:
+ raise ValueError("window must be non-negative")
@Substitution(name='rolling')
@Appender(SelectionMixin._see_also_template)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 2ec419221c6d8..3693ebdb12e2f 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -331,6 +331,11 @@ def test_constructor(self):
c(window=2, min_periods=1, center=True)
c(window=2, min_periods=1, center=False)
+ # GH 13383
+ c(0)
+ with self.assertRaises(ValueError):
+ c(-1)
+
# not valid
for w in [2., 'foo', np.array([2])]:
with self.assertRaises(ValueError):
@@ -340,6 +345,15 @@ def test_constructor(self):
with self.assertRaises(ValueError):
c(window=2, min_periods=1, center=w)
+ def test_constructor_with_win_type(self):
+ # GH 13383
+ tm._skip_if_no_scipy()
+ for o in [self.series, self.frame]:
+ c = o.rolling
+ c(0, win_type='boxcar')
+ with self.assertRaises(ValueError):
+ c(-1, win_type='boxcar')
+
def test_numpy_compat(self):
# see gh-12811
r = rwindow.Rolling(Series([2, 4, 6]), window=2)
| - [x] closes #13383
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Added a check in the `validate` method of `Rolling` to ensure that the window size is non-negative.
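With the validation in place, the failure moves to construction time (assuming a pandas build that includes this check):

```python
import pandas as pd

# A negative window is now rejected when the Rolling object is
# created, rather than failing later during aggregation.
s = pd.Series([2, 4, 6])
try:
    s.rolling(window=-1)
    raised = False
except ValueError:
    raised = True
print(raised)
```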
| https://api.github.com/repos/pandas-dev/pandas/pulls/13441 | 2016-06-14T16:02:36Z | 2016-06-21T10:09:29Z | null | 2016-06-21T10:09:36Z |
BUG: alignment of CategoricalIndex | diff --git a/pandas/indexes/category.py b/pandas/indexes/category.py
index 3b7c660f5faa1..1496f3b69aa5a 100644
--- a/pandas/indexes/category.py
+++ b/pandas/indexes/category.py
@@ -8,6 +8,7 @@
deprecate_kwarg)
from pandas.core.config import get_option
from pandas.indexes.base import Index, _index_shared_docs
+from pandas.types.concat import union_categoricals
import pandas.core.base as base
import pandas.core.common as com
import pandas.core.missing as missing
@@ -575,6 +576,33 @@ def append(self, other):
codes = np.concatenate([c.codes for c in to_concat])
return self._create_from_codes(codes, name=name)
+ def _join_non_unique(self, other, how='left', return_indexers=False):
+ """
+ Must be overridden because np.putmask() does not work on Categorical.
+ """
+
+ from pandas.tools.merge import _get_join_indexers
+
+ left_idx, right_idx = _get_join_indexers([self.values],
+ [other._values], how=how,
+ sort=True)
+
+ left_idx = com._ensure_platform_int(left_idx)
+ right_idx = com._ensure_platform_int(right_idx)
+
+ take_left = left_idx != -1
+
+ join_index = union_categoricals([self.values[left_idx[take_left]],
+ other._values[right_idx[~take_left]]],
+ masks=[take_left, ~take_left])
+
+ join_index = self._wrap_joined_index(join_index, other)
+
+ if return_indexers:
+ return join_index, left_idx, right_idx
+ else:
+ return join_index
+
@classmethod
def _add_comparison_methods(cls):
""" add in comparison methods """
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index d6f7493bb25f9..56387d054dbdd 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -225,9 +225,8 @@ def test_copy_name(self):
s1 = Series(2, index=first)
s2 = Series(3, index=second[:-1])
- if not isinstance(index, CategoricalIndex): # See GH13365
- s3 = s1 * s2
- self.assertEqual(s3.index.name, 'mario')
+ s3 = s1 * s2
+ self.assertEqual(s3.index.name, 'mario')
def test_ensure_copied_data(self):
# Check the "copy" argument of each Index.__new__ is honoured
diff --git a/pandas/types/concat.py b/pandas/types/concat.py
index 53db9ddf79a5c..322c6820e4c0a 100644
--- a/pandas/types/concat.py
+++ b/pandas/types/concat.py
@@ -201,7 +201,7 @@ def convert_categorical(x):
return Categorical(concatted, rawcats)
-def union_categoricals(to_union):
+def union_categoricals(to_union, masks=None):
"""
Combine list-like of Categoricals, unioning categories. All
must have the same dtype, and none can be ordered.
@@ -211,6 +211,10 @@ def union_categoricals(to_union):
Parameters
----------
to_union : list-like of Categoricals
+ masks: list-like of boolean arrays, all of same shape
+ They indicate where to position the values: their shape will be the
+ shape of the returned array. If None, members of "to_union" will be
+ just concatenated.
Returns
-------
@@ -243,11 +247,17 @@ def union_categoricals(to_union):
unique_cats = cats.append([c.categories for c in to_union[1:]]).unique()
categories = Index(unique_cats)
- new_codes = []
- for c in to_union:
- indexer = categories.get_indexer(c.categories)
- new_codes.append(indexer.take(c.codes))
- codes = np.concatenate(new_codes)
+ if masks is None:
+ new_codes = []
+ for c in to_union:
+ indexer = categories.get_indexer(c.categories)
+ new_codes.append(indexer.take(c.codes))
+ codes = np.concatenate(new_codes)
+ else:
+ codes = np.empty(shape=masks[0].shape, dtype=first.codes.dtype)
+ for c, mask in zip(to_union, masks):
+ indexer = categories.get_indexer(c.categories)
+ codes[mask] = indexer.take(c.codes)
return Categorical(codes, categories=categories, ordered=False,
fastpath=True)
| - [x] closes #13365
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
@jreback take this just as a wild guess of how to fix #13365 . If the approach makes sense, I will add tests, whatsnew and some more docs.
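The masks branch added to `union_categoricals` can be sketched with plain numpy (the codes and mask below are illustrative, not taken from a real join):

```python
import numpy as np

# Sketch of the masks branch: codes already remapped onto the union
# of categories are scattered into position via boolean masks instead
# of being concatenated end to end.
union_cats = np.array(['a', 'b', 'c'])

left_codes = np.array([0, 2])              # 'a', 'c' from the left side
right_codes = np.array([1])                # 'b' from the right side
take_left = np.array([True, False, True])  # where left values land

codes = np.empty(take_left.shape, dtype=left_codes.dtype)
codes[take_left] = left_codes
codes[~take_left] = right_codes
print(union_cats[codes])
```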
| https://api.github.com/repos/pandas-dev/pandas/pulls/13440 | 2016-06-14T14:45:52Z | 2016-07-09T12:55:28Z | null | 2016-07-09T14:23:15Z |
PERF: float hash slow in py3 | diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
index 586bd00b091fe..0611a3564ff7a 100644
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -379,15 +379,24 @@ def time_groupby_dt_timegrouper_size(self):
#----------------------------------------------------------------------
# groupby with a variable value for ngroups
-class groupby_ngroups_10000(object):
+class groupby_ngroups_int_10000(object):
goal_time = 0.2
+ dtype = 'int'
+ ngroups = 10000
def setup(self):
np.random.seed(1234)
- self.ngroups = 10000
- self.size = (self.ngroups * 2)
- self.rng = np.arange(self.ngroups)
- self.df = DataFrame(dict(timestamp=self.rng.take(np.random.randint(0, self.ngroups, size=self.size)), value=np.random.randint(0, self.size, size=self.size)))
+ size = self.ngroups * 2
+ rng = np.arange(self.ngroups)
+ ts = rng.take(np.random.randint(0, self.ngroups, size=size))
+ if self.dtype == 'int':
+ value = np.random.randint(0, size, size=size)
+ else:
+ value = np.concatenate([np.random.random(self.ngroups) * 0.1,
+ np.random.random(self.ngroups) * 10.0])
+
+ self.df = DataFrame({'timestamp': ts,
+ 'value': value})
def time_all(self):
self.df.groupby('value')['timestamp'].all()
@@ -482,109 +491,35 @@ def time_value_counts(self):
def time_var(self):
self.df.groupby('value')['timestamp'].var()
-
-class groupby_ngroups_100(object):
+class groupby_ngroups_int_100(groupby_ngroups_int_10000):
goal_time = 0.2
+ dtype = 'int'
+ ngroups = 100
- def setup(self):
- np.random.seed(1234)
- self.ngroups = 100
- self.size = (self.ngroups * 2)
- self.rng = np.arange(self.ngroups)
- self.df = DataFrame(dict(timestamp=self.rng.take(np.random.randint(0, self.ngroups, size=self.size)), value=np.random.randint(0, self.size, size=self.size)))
-
- def time_all(self):
- self.df.groupby('value')['timestamp'].all()
-
- def time_any(self):
- self.df.groupby('value')['timestamp'].any()
-
- def time_count(self):
- self.df.groupby('value')['timestamp'].count()
-
- def time_cumcount(self):
- self.df.groupby('value')['timestamp'].cumcount()
-
- def time_cummax(self):
- self.df.groupby('value')['timestamp'].cummax()
-
- def time_cummin(self):
- self.df.groupby('value')['timestamp'].cummin()
-
- def time_cumprod(self):
- self.df.groupby('value')['timestamp'].cumprod()
-
- def time_cumsum(self):
- self.df.groupby('value')['timestamp'].cumsum()
-
- def time_describe(self):
- self.df.groupby('value')['timestamp'].describe()
-
- def time_diff(self):
- self.df.groupby('value')['timestamp'].diff()
-
- def time_first(self):
- self.df.groupby('value')['timestamp'].first()
-
- def time_head(self):
- self.df.groupby('value')['timestamp'].head()
-
- def time_last(self):
- self.df.groupby('value')['timestamp'].last()
-
- def time_mad(self):
- self.df.groupby('value')['timestamp'].mad()
-
- def time_max(self):
- self.df.groupby('value')['timestamp'].max()
-
- def time_mean(self):
- self.df.groupby('value')['timestamp'].mean()
-
- def time_median(self):
- self.df.groupby('value')['timestamp'].median()
-
- def time_min(self):
- self.df.groupby('value')['timestamp'].min()
-
- def time_nunique(self):
- self.df.groupby('value')['timestamp'].nunique()
-
- def time_pct_change(self):
- self.df.groupby('value')['timestamp'].pct_change()
-
- def time_prod(self):
- self.df.groupby('value')['timestamp'].prod()
-
- def time_rank(self):
- self.df.groupby('value')['timestamp'].rank()
-
- def time_sem(self):
- self.df.groupby('value')['timestamp'].sem()
-
- def time_size(self):
- self.df.groupby('value')['timestamp'].size()
-
- def time_skew(self):
- self.df.groupby('value')['timestamp'].skew()
-
- def time_std(self):
- self.df.groupby('value')['timestamp'].std()
+class groupby_ngroups_float_100(groupby_ngroups_int_10000):
+ goal_time = 0.2
+ dtype = 'float'
+ ngroups = 100
- def time_sum(self):
- self.df.groupby('value')['timestamp'].sum()
+class groupby_ngroups_float_10000(groupby_ngroups_int_10000):
+ goal_time = 0.2
+ dtype = 'float'
+ ngroups = 10000
- def time_tail(self):
- self.df.groupby('value')['timestamp'].tail()
- def time_unique(self):
- self.df.groupby('value')['timestamp'].unique()
+class groupby_float32(object):
+ # GH 13335
+ goal_time = 0.2
- def time_value_counts(self):
- self.df.groupby('value')['timestamp'].value_counts()
+ def setup(self):
+ tmp1 = (np.random.random(10000) * 0.1).astype(np.float32)
+ tmp2 = (np.random.random(10000) * 10.0).astype(np.float32)
+ tmp = np.concatenate((tmp1, tmp2))
+ arr = np.repeat(tmp, 10)
+ self.df = DataFrame(dict(a=arr, b=arr))
- def time_var(self):
- self.df.groupby('value')['timestamp'].var()
+ def time_groupby_sum(self):
+ self.df.groupby(['a'])['b'].sum()
#----------------------------------------------------------------------
diff --git a/asv_bench/benchmarks/indexing.py b/asv_bench/benchmarks/indexing.py
index 32d80a7913234..53d37a8161f43 100644
--- a/asv_bench/benchmarks/indexing.py
+++ b/asv_bench/benchmarks/indexing.py
@@ -486,4 +486,17 @@ def setup(self):
self.midx = self.midx.take(np.random.permutation(np.arange(100000)))
def time_sort_level_zero(self):
- self.midx.sortlevel(0)
\ No newline at end of file
+ self.midx.sortlevel(0)
+
+class float_loc(object):
+ # GH 13166
+ goal_time = 0.2
+
+ def setup(self):
+ a = np.arange(100000)
+ self.ind = pd.Float64Index(a * 4.8000000418824129e-08)
+
+ def time_float_loc(self):
+ self.ind.get_loc(0)
+
+
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 105194e504f45..0b8b7b56fd36b 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -307,7 +307,7 @@ Performance Improvements
- Improved performance of sparse arithmetic with ``BlockIndex`` when the number of blocks are large, though recommended to use ``IntIndex`` in such cases (:issue:`13082`)
- increased performance of ``DataFrame.quantile()`` as it now operates per-block (:issue:`11623`)
-
+- Improved performance of float64 hash table fixing some very slow indexing and groupby operations in python 3 (:issue:`13166`, :issue:`13334`)
- Improved performance of ``DataFrameGroupBy.transform`` (:issue:`12737`)
diff --git a/pandas/src/klib/khash_python.h b/pandas/src/klib/khash_python.h
index cdd94b5d8522f..7684493d08855 100644
--- a/pandas/src/klib/khash_python.h
+++ b/pandas/src/klib/khash_python.h
@@ -2,9 +2,21 @@
#include "khash.h"
-// kludge
-
-#define kh_float64_hash_func _Py_HashDouble
+// Previously we were using the built in cpython hash function for doubles
+// python 2.7 https://github.com/python/cpython/blob/2.7/Objects/object.c#L1021
+// python 3.5 https://github.com/python/cpython/blob/3.5/Python/pyhash.c#L85
+
+// The python 3 hash function has the invariant hash(x) == hash(int(x)) == hash(decimal(x))
+// and the size of hash may be different by platform / version (long in py2, Py_ssize_t in py3).
+// We don't need those invariants because types will be cast before hashing, and if Py_ssize_t
+// is 64 bits the truncation causes collision issues. Given all that, we use our own
+// simple hash, viewing the double bytes as an int64 and using khash's default
+// hash for 64 bit integers.
+// GH 13436
+inline khint64_t asint64(double key) {
+ return *(khint64_t *)(&key);
+}
+#define kh_float64_hash_func(key) (khint32_t)((asint64(key))>>33^(asint64(key))^(asint64(key))<<11)
#define kh_float64_hash_equal(a, b) ((a) == (b) || ((b) != (b) && (a) != (a)))
#define KHASH_MAP_INIT_FLOAT64(name, khval_t) \
| closes #13166, closes #13335
Using exactly the approach suggested by @ruoyu0088.
Significant changes in the ASV benchmarks below:
```
before after ratio
- 8.88s 78.12ms 0.01 indexing.float_loc.time_float_loc
- 13.11s 78.12ms 0.01 groupby.groupby_float32.time_groupby_sum
```
Rerun with benchmarks a factor of 10 smaller:
```
before after ratio
- 171.88ms 43.29ms 0.25 indexing.float_loc.time_float_loc
- 1.42s 11.23ms 0.01 groupby.groupby_float32.time_groupby_sum
```
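As a rough illustration of the new scheme, here is a hypothetical Python port of the `kh_float64_hash_func` macro above (not part of this PR, just a sketch):

```python
import struct

def asint64(key):
    # Reinterpret the 8 bytes of an IEEE-754 double as an unsigned
    # 64-bit integer, mirroring the C cast *(khint64_t *)(&key).
    return struct.unpack("<Q", struct.pack("<d", key))[0]

def kh_float64_hash(key):
    # Mirror of the macro: (khint32_t)((x >> 33) ^ x ^ (x << 11)),
    # i.e. mix the high and low bits of the double, then truncate
    # the result to 32 bits.
    x = asint64(key)
    return ((x >> 33) ^ x ^ (x << 11)) & 0xFFFFFFFF

# Equal floats hash equally, and the bit mixing keeps nearby doubles
# from piling into a handful of buckets the way truncated
# _Py_HashDouble values could on 64-bit Python 3.
assert kh_float64_hash(1.0) == kh_float64_hash(1.0)
```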
| https://api.github.com/repos/pandas-dev/pandas/pulls/13436 | 2016-06-14T03:31:30Z | 2016-06-15T01:48:21Z | null | 2016-08-06T21:24:14Z |
BUG: df.pivot_table: margins_name ignored when aggfunc is a list | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 105194e504f45..c4670b8c03e83 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -383,3 +383,4 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``pd.pivot_table()`` where ``margins_name`` is ignored when ``aggfunc`` is a list (:issue:`13354`)
diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index a4e6cc404a457..e1405bc9e6add 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -86,7 +86,7 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
table = pivot_table(data, values=values, index=index,
columns=columns,
fill_value=fill_value, aggfunc=func,
- margins=margins)
+ margins=margins, margins_name=margins_name)
pieces.append(table)
keys.append(func.__name__)
return concat(pieces, keys=keys, axis=1)
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 82feaae13f771..7ec4018d301af 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -779,6 +779,28 @@ def test_pivot_table_with_iterator_values(self):
)
tm.assert_frame_equal(pivot_values_gen, pivot_values_list)
+ def test_pivot_table_margins_name_with_aggfunc_list(self):
+ # GH 13354
+ margins_name = 'Weekly'
+ costs = pd.DataFrame(
+ {'item': ['bacon', 'cheese', 'bacon', 'cheese'],
+ 'cost': [2.5, 4.5, 3.2, 3.3],
+ 'day': ['M', 'M', 'T', 'T']}
+ )
+ table = costs.pivot_table(
+ index="item", columns="day", margins=True,
+ margins_name=margins_name, aggfunc=[np.mean, max]
+ )
+ ix = pd.Index(
+ ['bacon', 'cheese', margins_name], dtype='object', name='item'
+ )
+ tups = [('mean', 'cost', 'M'), ('mean', 'cost', 'T'),
+ ('mean', 'cost', margins_name), ('max', 'cost', 'M'),
+ ('max', 'cost', 'T'), ('max', 'cost', margins_name)]
+ cols = pd.MultiIndex.from_tuples(tups, names=[None, None, 'day'])
+ expected = pd.DataFrame(table.values, index=ix, columns=cols)
+ tm.assert_frame_equal(table, expected)
+
class TestCrosstab(tm.TestCase):
| - [ ] closes #13354
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
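For reference, a small sketch of the fixed behaviour, based on the test added above (string aggfuncs used here instead of `[np.mean, max]` for brevity):

```python
import pandas as pd

costs = pd.DataFrame({'item': ['bacon', 'cheese', 'bacon', 'cheese'],
                      'cost': [2.5, 4.5, 3.2, 3.3],
                      'day': ['M', 'M', 'T', 'T']})

# Before this fix, the margins row/column fell back to the default
# label 'All' whenever aggfunc was a list; now the custom
# margins_name is propagated through each per-function pivot.
table = costs.pivot_table(index='item', columns='day', margins=True,
                          margins_name='Weekly', aggfunc=['mean', 'max'])

assert 'Weekly' in table.index
assert 'Weekly' in table.columns.get_level_values('day')
```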
| https://api.github.com/repos/pandas-dev/pandas/pulls/13435 | 2016-06-13T23:28:16Z | 2016-06-18T15:25:16Z | null | 2016-06-18T15:25:37Z |
ENH: PeriodIndex now accepts pd.NaT | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index e469cbf79b31a..45a9b75556bf6 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -288,6 +288,7 @@ Other API changes
- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
+- ``PeriodIndex`` can now accept ``list`` and ``array`` which contain ``pd.NaT`` (:issue:`13430`)
.. _whatsnew_0182.deprecations:
diff --git a/pandas/src/period.pyx b/pandas/src/period.pyx
index 858aa58df8d7d..aca0d0dbc107b 100644
--- a/pandas/src/period.pyx
+++ b/pandas/src/period.pyx
@@ -24,6 +24,7 @@ cimport cython
from datetime cimport *
cimport util
cimport lib
+from lib cimport is_null_datetimelike
import lib
from pandas import tslib
from tslib import Timedelta, Timestamp, iNaT, NaT
@@ -458,13 +459,39 @@ def extract_ordinals(ndarray[object] values, freq):
for i in range(n):
p = values[i]
- ordinals[i] = p.ordinal
- if p.freqstr != freqstr:
- msg = _DIFFERENT_FREQ_INDEX.format(freqstr, p.freqstr)
- raise IncompatibleFrequency(msg)
+
+ if is_null_datetimelike(p):
+ ordinals[i] = tslib.iNaT
+ else:
+ try:
+ ordinals[i] = p.ordinal
+
+ if p.freqstr != freqstr:
+ msg = _DIFFERENT_FREQ_INDEX.format(freqstr, p.freqstr)
+ raise IncompatibleFrequency(msg)
+
+ except AttributeError:
+ p = Period(p, freq=freq)
+ ordinals[i] = p.ordinal
return ordinals
+
+def extract_freq(ndarray[object] values):
+ cdef:
+ Py_ssize_t i, n = len(values)
+ object p
+
+ for i in range(n):
+ p = values[i]
+ try:
+ return p.freq
+ except AttributeError:
+ pass
+
+ raise ValueError('freq not specified and cannot be inferred')
+
+
cpdef resolution(ndarray[int64_t] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
@@ -719,7 +746,7 @@ cdef class Period(object):
converted = other.asfreq(freq)
ordinal = converted.ordinal
- elif lib.is_null_datetimelike(value) or value in tslib._nat_strings:
+ elif is_null_datetimelike(value) or value in tslib._nat_strings:
ordinal = tslib.iNaT
if freq is None:
raise ValueError("If value is NaT, freq cannot be None "
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 8a3ac1f080c90..750e7a5553ef6 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -40,14 +40,6 @@ def f(self):
return property(f)
-def _get_ordinals(data, freq):
- f = lambda x: Period(x, freq=freq).ordinal
- if isinstance(data[0], Period):
- return period.extract_ordinals(data, freq)
- else:
- return lib.map_infer(data, f)
-
-
def dt64arr_to_periodarr(data, freq, tz):
if data.dtype != np.dtype('M8[ns]'):
raise ValueError('Wrong dtype: %s' % data.dtype)
@@ -235,14 +227,9 @@ def _from_arraylike(cls, data, freq, tz):
except (TypeError, ValueError):
data = com._ensure_object(data)
- if freq is None and len(data) > 0:
- freq = getattr(data[0], 'freq', None)
-
if freq is None:
- raise ValueError('freq not specified and cannot be '
- 'inferred from first element')
-
- data = _get_ordinals(data, freq)
+ freq = period.extract_freq(data)
+ data = period.extract_ordinals(data, freq)
else:
if isinstance(data, PeriodIndex):
if freq is None or freq == data.freq:
@@ -254,12 +241,15 @@ def _from_arraylike(cls, data, freq, tz):
data = period.period_asfreq_arr(data.values,
base1, base2, 1)
else:
- if freq is None and len(data) > 0:
- freq = getattr(data[0], 'freq', None)
+
+ if freq is None and com.is_object_dtype(data):
+ # must contain Period instance and thus extract ordinals
+ freq = period.extract_freq(data)
+ data = period.extract_ordinals(data, freq)
if freq is None:
- raise ValueError('freq not specified and cannot be '
- 'inferred from first element')
+ msg = 'freq not specified and cannot be inferred'
+ raise ValueError(msg)
if data.dtype != np.int64:
if np.issubdtype(data.dtype, np.datetime64):
@@ -269,7 +259,7 @@ def _from_arraylike(cls, data, freq, tz):
data = com._ensure_int64(data)
except (TypeError, ValueError):
data = com._ensure_object(data)
- data = _get_ordinals(data, freq)
+ data = period.extract_ordinals(data, freq)
return data, freq
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index de23306c80b71..807fb86b1b4da 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -1742,6 +1742,84 @@ def test_constructor_datetime64arr(self):
self.assertRaises(ValueError, PeriodIndex, vals, freq='D')
+ def test_constructor_empty(self):
+ idx = pd.PeriodIndex([], freq='M')
+ tm.assertIsInstance(idx, PeriodIndex)
+ self.assertEqual(len(idx), 0)
+ self.assertEqual(idx.freq, 'M')
+
+ with tm.assertRaisesRegexp(ValueError, 'freq not specified'):
+ pd.PeriodIndex([])
+
+ def test_constructor_pi_nat(self):
+ idx = PeriodIndex([Period('2011-01', freq='M'), pd.NaT,
+ Period('2011-01', freq='M')])
+ exp = PeriodIndex(['2011-01', 'NaT', '2011-01'], freq='M')
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex(np.array([Period('2011-01', freq='M'), pd.NaT,
+ Period('2011-01', freq='M')]))
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex([pd.NaT, pd.NaT, Period('2011-01', freq='M'),
+ Period('2011-01', freq='M')])
+ exp = PeriodIndex(['NaT', 'NaT', '2011-01', '2011-01'], freq='M')
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex(np.array([pd.NaT, pd.NaT,
+ Period('2011-01', freq='M'),
+ Period('2011-01', freq='M')]))
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex([pd.NaT, pd.NaT, '2011-01', '2011-01'], freq='M')
+ tm.assert_index_equal(idx, exp)
+
+ with tm.assertRaisesRegexp(ValueError, 'freq not specified'):
+ PeriodIndex([pd.NaT, pd.NaT])
+
+ with tm.assertRaisesRegexp(ValueError, 'freq not specified'):
+ PeriodIndex(np.array([pd.NaT, pd.NaT]))
+
+ with tm.assertRaisesRegexp(ValueError, 'freq not specified'):
+ PeriodIndex(['NaT', 'NaT'])
+
+ with tm.assertRaisesRegexp(ValueError, 'freq not specified'):
+ PeriodIndex(np.array(['NaT', 'NaT']))
+
+ def test_constructor_incompat_freq(self):
+ msg = "Input has different freq=D from PeriodIndex\\(freq=M\\)"
+
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ PeriodIndex([Period('2011-01', freq='M'), pd.NaT,
+ Period('2011-01', freq='D')])
+
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ PeriodIndex(np.array([Period('2011-01', freq='M'), pd.NaT,
+ Period('2011-01', freq='D')]))
+
+ # first element is pd.NaT
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ PeriodIndex([pd.NaT, Period('2011-01', freq='M'),
+ Period('2011-01', freq='D')])
+
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ PeriodIndex(np.array([pd.NaT, Period('2011-01', freq='M'),
+ Period('2011-01', freq='D')]))
+
+ def test_constructor_mixed(self):
+ idx = PeriodIndex(['2011-01', pd.NaT, Period('2011-01', freq='M')])
+ exp = PeriodIndex(['2011-01', 'NaT', '2011-01'], freq='M')
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex(['NaT', pd.NaT, Period('2011-01', freq='M')])
+ exp = PeriodIndex(['NaT', 'NaT', '2011-01'], freq='M')
+ tm.assert_index_equal(idx, exp)
+
+ idx = PeriodIndex([Period('2011-01-01', freq='D'), pd.NaT,
+ '2012-01-01'])
+ exp = PeriodIndex(['2011-01-01', 'NaT', '2012-01-01'], freq='D')
+ tm.assert_index_equal(idx, exp)
+
def test_constructor_simple_new(self):
idx = period_range('2007-01', name='p', periods=2, freq='M')
result = idx._simple_new(idx, 'p', freq=idx.freq)
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Related to #12759. `PeriodIndex` can now be created from a list/array which contains `pd.NaT`.
Currently it raises:
```
pd.PeriodIndex([pd.NaT])
# ValueError: freq not specified and cannot be inferred from first element
pd.PeriodIndex([pd.Period('2011-01', freq='M'), pd.NaT])
# AttributeError: 'NaTType' object has no attribute 'ordinal'
```
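With this change, `pd.NaT` entries become missing periods and the frequency is extracted from the first `Period` actually found in the input, not blindly from element 0. A quick sketch of the new behaviour:

```python
import pandas as pd

# A leading NaT no longer breaks frequency inference: extract_freq
# scans for the first Period instance, and NaT entries are stored
# as missing (iNaT) ordinals.
idx = pd.PeriodIndex([pd.NaT, pd.Period('2011-01', freq='M'),
                      pd.Period('2011-02', freq='M')])

assert len(idx) == 3
assert idx[0] is pd.NaT
assert idx.freqstr == 'M'
```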
| https://api.github.com/repos/pandas-dev/pandas/pulls/13430 | 2016-06-12T12:18:24Z | 2016-06-14T21:26:02Z | null | 2016-06-14T22:35:59Z |
BUG: in _nsorted for frame with duplicated values index | - [x] closes #13412
- [ ] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13428 | 2016-06-12T04:11:50Z | 2016-10-28T15:59:24Z | null | 2016-10-28T15:59:24Z |
|
BUG: categorical unpickle to use _coerce_indexer_dtype | diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index fa3d13c174245..6dba41a746e19 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -999,11 +999,12 @@ def __setstate__(self, state):
raise Exception('invalid pickle state')
# Provide compatibility with pre-0.15.0 Categoricals.
- if '_codes' not in state and 'labels' in state:
- state['_codes'] = state.pop('labels').astype(np.int8)
if '_categories' not in state and '_levels' in state:
state['_categories'] = self._validate_categories(state.pop(
'_levels'))
+ if '_codes' not in state and 'labels' in state:
+ state['_codes'] = _coerce_indexer_dtype(state.pop('labels'),
+ state['_categories'])
# 0.16.0 ordered change
if '_ordered' not in state:
diff --git a/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_darwin_2.7.12.pickle b/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_darwin_2.7.12.pickle
new file mode 100644
index 0000000000000..917ad2b0ff1a3
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_darwin_2.7.12.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.15.0/0.15.0_x86_64_darwin_2.7.12.pickle b/pandas/io/tests/data/legacy_pickle/0.15.0/0.15.0_x86_64_darwin_2.7.12.pickle
new file mode 100644
index 0000000000000..c7a745cf9b458
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.15.0/0.15.0_x86_64_darwin_2.7.12.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.18.1/0.18.1_x86_64_darwin_2.7.12.pickle b/pandas/io/tests/data/legacy_pickle/0.18.1/0.18.1_x86_64_darwin_2.7.12.pickle
new file mode 100644
index 0000000000000..5ee1f88c93a34
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.18.1/0.18.1_x86_64_darwin_2.7.12.pickle differ
diff --git a/pandas/io/tests/generate_legacy_storage_files.py b/pandas/io/tests/generate_legacy_storage_files.py
index bfa8ff6d30a9c..25fd86d899c08 100644
--- a/pandas/io/tests/generate_legacy_storage_files.py
+++ b/pandas/io/tests/generate_legacy_storage_files.py
@@ -80,6 +80,7 @@ def create_data():
[u'one', u'two', u'one', u'two', u'one',
u'two', u'one', u'two']])),
names=[u'first', u'second']))
+
series = dict(float=Series(data[u'A']),
int=Series(data[u'B']),
mixed=Series(data[u'E']),
@@ -135,6 +136,10 @@ def create_data():
items=[u'A', u'B', u'A']),
mixed_dup=mixed_dup_panel)
+ cat = dict(int8=Categorical(list('abcdefg')),
+ int16=Categorical(np.arange(1000)),
+ int32=Categorical(np.arange(10000)))
+
return dict(series=series,
frame=frame,
panel=panel,
@@ -143,7 +148,8 @@ def create_data():
mi=mi,
sp_series=dict(float=_create_sp_series(),
ts=_create_sp_tsseries()),
- sp_frame=dict(float=_create_sp_frame()))
+ sp_frame=dict(float=_create_sp_frame()),
+ cat=cat)
def create_pickle_data():
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index c12d6e02e3a2e..e337ad4dcfed2 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -109,8 +109,12 @@ def compare_series_dt_tz(self, result, expected, typ, version):
tm.assert_series_equal(result, expected)
def compare_series_cat(self, result, expected, typ, version):
- # Categorical.ordered is changed in < 0.16.0
- if LooseVersion(version) < '0.16.0':
+ # Categorical dtype is added in 0.15.0
+ # ordered is changed in 0.16.0
+ if LooseVersion(version) < '0.15.0':
+ tm.assert_series_equal(result, expected, check_dtype=False,
+ check_categorical=False)
+ elif LooseVersion(version) < '0.16.0':
tm.assert_series_equal(result, expected, check_categorical=False)
else:
tm.assert_series_equal(result, expected)
@@ -125,8 +129,12 @@ def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
tm.assert_frame_equal(result, expected)
def compare_frame_cat_onecol(self, result, expected, typ, version):
- # Categorical.ordered is changed in < 0.16.0
- if LooseVersion(version) < '0.16.0':
+ # Categorical dtype is added in 0.15.0
+ # ordered is changed in 0.16.0
+ if LooseVersion(version) < '0.15.0':
+ tm.assert_frame_equal(result, expected, check_dtype=False,
+ check_categorical=False)
+ elif LooseVersion(version) < '0.16.0':
tm.assert_frame_equal(result, expected, check_categorical=False)
else:
tm.assert_frame_equal(result, expected)
| - [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Follow-up to #13080 to use `_coerce_indexer_dtype`.
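`_coerce_indexer_dtype` picks the smallest integer dtype that can hold the category codes, which is the invariant the new legacy-pickle data above exercises. A sketch of that rule (illustrative, not part of the diff):

```python
import numpy as np
import pandas as pd

# The codes dtype grows with the number of categories, matching the
# int8/int16 cases generated for the legacy pickle files above.
assert pd.Categorical(list('abcdefg')).codes.dtype == np.int8   # 7 categories
assert pd.Categorical(np.arange(1000)).codes.dtype == np.int16  # 1000 categories

# The fix routes unpickled pre-0.15.0 'labels' arrays through this
# same coercion instead of hard-coding .astype(np.int8).
```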
| https://api.github.com/repos/pandas-dev/pandas/pulls/13426 | 2016-06-11T10:01:59Z | 2016-07-05T10:42:47Z | null | 2016-07-05T10:44:15Z |
ENH: add downcast to pd.to_numeric | diff --git a/asv_bench/benchmarks/inference.py b/asv_bench/benchmarks/inference.py
index 3fceed087facb..6809c351beade 100644
--- a/asv_bench/benchmarks/inference.py
+++ b/asv_bench/benchmarks/inference.py
@@ -135,4 +135,23 @@ def setup(self):
self.df_timedelta64 = DataFrame(dict(A=(self.df_datetime64['A'] - self.df_datetime64['B']), B=self.df_datetime64['B']))
def time_dtype_infer_uint32(self):
- (self.df_uint32['A'] + self.df_uint32['B'])
\ No newline at end of file
+ (self.df_uint32['A'] + self.df_uint32['B'])
+
+
+class to_numeric(object):
+ N = 500000
+
+ param_names = ['data', 'downcast']
+ params = [
+ [['1'] * (N // 2) + [2] * (N // 2),
+ ['-1'] * (N // 2) + [2] * (N // 2),
+ np.repeat(np.array(['1970-01-01', '1970-01-02'],
+ dtype='datetime64[D]'), N),
+ ['1.1'] * (N // 2) + [2] * (N // 2),
+ [1] * (N // 2) + [2] * (N // 2),
+ np.repeat(np.int32(1), N)],
+ [None, 'integer', 'signed', 'unsigned', 'float'],
+ ]
+
+ def time_to_numeric(self, data, downcast):
+ pd.to_numeric(data, downcast=downcast)
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 8145e9536a82a..203ff6a2ef2f6 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1754,39 +1754,93 @@ Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
object conversion
~~~~~~~~~~~~~~~~~
-:meth:`~DataFrame.convert_objects` is a method to try to force conversion of types from the ``object`` dtype to other types.
-To force conversion of specific types that are *number like*, e.g. could be a string that represents a number,
-pass ``convert_numeric=True``. This will force strings and numbers alike to be numbers if possible, otherwise
-they will be set to ``np.nan``.
+pandas offers various functions to try to force conversion of types from the ``object`` dtype to other types.
+The following functions are available for one dimensional object arrays or scalars:
+
+ 1) :meth:`~pandas.to_datetime` (conversion to datetime objects)
+
+ .. ipython:: python
+
+ import datetime
+ m = ['2016-07-09', datetime.datetime(2016, 3, 2)]
+ pd.to_datetime(m)
+
+ 2) :meth:`~pandas.to_numeric` (conversion to numeric dtypes)
+
+ .. ipython:: python
+
+ m = ['1.1', 2, 3]
+ pd.to_numeric(m)
+
+ 3) :meth:`~pandas.to_timedelta` (conversion to timedelta objects)
+
+ .. ipython:: python
+
+ m = ['5us', pd.Timedelta('1day')]
+ pd.to_timedelta(m)
+
+To force a conversion, we can pass in an ``errors`` argument, which specifies how pandas should deal with elements
+that cannot be converted to desired dtype or object. By default, ``errors='raise'``, meaning that any errors encountered
+will be raised during the conversion process. However, if ``errors='coerce'``, these errors will be ignored and pandas
+will convert problematic elements to ``pd.NaT`` (for datetime and timedelta) or ``np.nan`` (for numeric). This might be
+useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but occasionally has
+non-conforming elements intermixed that you want to represent as missing:
.. ipython:: python
- :okwarning:
- df3['D'] = '1.'
- df3['E'] = '1'
- df3.convert_objects(convert_numeric=True).dtypes
+ import datetime
+ m = ['apple', datetime.datetime(2016, 3, 2)]
+ pd.to_datetime(m, errors='coerce')
- # same, but specific dtype conversion
- df3['D'] = df3['D'].astype('float16')
- df3['E'] = df3['E'].astype('int32')
- df3.dtypes
+ m = ['apple', 2, 3]
+ pd.to_numeric(m, errors='coerce')
+
+ m = ['apple', pd.Timedelta('1day')]
+ pd.to_timedelta(m, errors='coerce')
-To force conversion to ``datetime64[ns]``, pass ``convert_dates='coerce'``.
-This will convert any datetime-like object to dates, forcing other values to ``NaT``.
-This might be useful if you are reading in data which is mostly dates,
-but occasionally has non-dates intermixed and you want to represent as missing.
+The ``errors`` parameter has a third option of ``errors='ignore'``, which will simply return the passed in data if it
+encounters any errors with the conversion to a desired data type:
.. ipython:: python
- import datetime
- s = pd.Series([datetime.datetime(2001,1,1,0,0),
- 'foo', 1.0, 1, pd.Timestamp('20010104'),
- '20010105'], dtype='O')
- s
- pd.to_datetime(s, errors='coerce')
+ import datetime
+ m = ['apple', datetime.datetime(2016, 3, 2)]
+ pd.to_datetime(m, errors='ignore')
+
+ m = ['apple', 2, 3]
+ pd.to_numeric(m, errors='ignore')
+
+ m = ['apple', pd.Timedelta('1day')]
+ pd.to_timedelta(m, errors='ignore')
+
+In addition to object conversion, :meth:`~pandas.to_numeric` provides another argument `downcast`, which gives the
+option of downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
+
+.. ipython:: python
+
+ m = ['1', 2, 3]
+ pd.to_numeric(m, downcast='integer') # smallest signed int dtype
+ pd.to_numeric(m, downcast='signed') # same as 'integer'
+ pd.to_numeric(m, downcast='unsigned') # smallest unsigned int dtype
+ pd.to_numeric(m, downcast='float') # smallest float dtype
+
+As these methods apply only to one-dimensional arrays, they cannot be used directly on multi-dimensional objects such
+as DataFrames. However, with :meth:`~pandas.DataFrame.apply`, we can "apply" the function over all elements:
-In addition, :meth:`~DataFrame.convert_objects` will attempt the *soft* conversion of any *object* dtypes, meaning that if all
-the objects in a Series are of the same type, the Series will have that dtype.
+.. ipython:: python
+
+ import datetime
+ df = pd.DataFrame([['2016-07-09', datetime.datetime(2016, 3, 2)]] * 2, dtype='O')
+ df
+ df.apply(pd.to_datetime)
+
+ df = pd.DataFrame([['1.1', 2, 3]] * 2, dtype='O')
+ df
+ df.apply(pd.to_numeric)
+
+ df = pd.DataFrame([['5us', pd.Timedelta('1day')]] * 2, dtype='O')
+ df
+ df.apply(pd.to_timedelta)
gotchas
~~~~~~~
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 657de7ec26efc..3a6a0b6213a9c 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -186,6 +186,13 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT``. The default behaviour remains to raising a ``NonExistentTimeError`` (:issue:`13057`)
+- ``pd.to_numeric()`` now accepts a ``downcast`` parameter, which will downcast the data if possible to smallest specified numerical dtype (:issue:`13352`)
+
+ .. ipython:: python
+
+ s = ['1', 2, 3]
+ pd.to_numeric(s, downcast='unsigned')
+ pd.to_numeric(s, downcast='integer')
- ``Index`` now supports ``.str.extractall()`` which returns a ``DataFrame``, see :ref:`documentation here <text.extractall>` (:issue:`10008`, :issue:`13156`)
- ``.to_hdf/read_hdf()`` now accept path objects (e.g. ``pathlib.Path``, ``py.path.local``) for the file path (:issue:`11773`)
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index c592b33bdab9a..5b738086a1ad4 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -291,6 +291,83 @@ def test_non_hashable(self):
with self.assertRaisesRegexp(TypeError, "Invalid object type"):
pd.to_numeric(s)
+ def test_downcast(self):
+ # see gh-13352
+ mixed_data = ['1', 2, 3]
+ int_data = [1, 2, 3]
+ date_data = np.array(['1970-01-02', '1970-01-03',
+ '1970-01-04'], dtype='datetime64[D]')
+
+ invalid_downcast = 'unsigned-integer'
+ msg = 'invalid downcasting method provided'
+
+ smallest_int_dtype = np.dtype(np.typecodes['Integer'][0])
+ smallest_uint_dtype = np.dtype(np.typecodes['UnsignedInteger'][0])
+
+ # support below np.float32 is rare and far between
+ float_32_char = np.dtype(np.float32).char
+ smallest_float_dtype = float_32_char
+
+ for data in (mixed_data, int_data, date_data):
+ with self.assertRaisesRegexp(ValueError, msg):
+ pd.to_numeric(data, downcast=invalid_downcast)
+
+ expected = np.array([1, 2, 3], dtype=np.int64)
+
+ res = pd.to_numeric(data)
+ tm.assert_numpy_array_equal(res, expected)
+
+ res = pd.to_numeric(data, downcast=None)
+ tm.assert_numpy_array_equal(res, expected)
+
+ expected = np.array([1, 2, 3], dtype=smallest_int_dtype)
+
+ for signed_downcast in ('integer', 'signed'):
+ res = pd.to_numeric(data, downcast=signed_downcast)
+ tm.assert_numpy_array_equal(res, expected)
+
+ expected = np.array([1, 2, 3], dtype=smallest_uint_dtype)
+ res = pd.to_numeric(data, downcast='unsigned')
+ tm.assert_numpy_array_equal(res, expected)
+
+ expected = np.array([1, 2, 3], dtype=smallest_float_dtype)
+ res = pd.to_numeric(data, downcast='float')
+ tm.assert_numpy_array_equal(res, expected)
+
+ # if we can't successfully cast the given
+ # data to a numeric dtype, do not bother
+ # with the downcast parameter
+ data = ['foo', 2, 3]
+ expected = np.array(data, dtype=object)
+ res = pd.to_numeric(data, errors='ignore',
+ downcast='unsigned')
+ tm.assert_numpy_array_equal(res, expected)
+
+ # cannot cast to an unsigned integer because
+ # we have a negative number
+ data = ['-1', 2, 3]
+ expected = np.array([-1, 2, 3], dtype=np.int64)
+ res = pd.to_numeric(data, downcast='unsigned')
+ tm.assert_numpy_array_equal(res, expected)
+
+ # cannot cast to an integer (signed or unsigned)
+ # because we have a float number
+ data = ['1.1', 2, 3]
+ expected = np.array([1.1, 2, 3], dtype=np.float64)
+
+ for downcast in ('integer', 'signed', 'unsigned'):
+ res = pd.to_numeric(data, downcast=downcast)
+ tm.assert_numpy_array_equal(res, expected)
+
+ # the smallest integer dtype need not be np.(u)int8
+ data = ['256', 257, 258]
+
+ for downcast, expected_dtype in zip(
+ ['integer', 'signed', 'unsigned'],
+ [np.int16, np.int16, np.uint16]):
+ expected = np.array([256, 257, 258], dtype=expected_dtype)
+ res = pd.to_numeric(data, downcast=downcast)
+ tm.assert_numpy_array_equal(res, expected)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tools/util.py b/pandas/tools/util.py
index 61d2c0adce2fe..d70904e1bf286 100644
--- a/pandas/tools/util.py
+++ b/pandas/tools/util.py
@@ -50,7 +50,7 @@ def compose(*funcs):
return reduce(_compose2, funcs)
-def to_numeric(arg, errors='raise'):
+def to_numeric(arg, errors='raise', downcast=None):
"""
Convert argument to a numeric type.
@@ -61,6 +61,27 @@ def to_numeric(arg, errors='raise'):
- If 'raise', then invalid parsing will raise an exception
- If 'coerce', then invalid parsing will be set as NaN
- If 'ignore', then invalid parsing will return the input
+ downcast : {'integer', 'signed', 'unsigned', 'float'} , default None
+ If not None, and if the data has been successfully cast to a
+ numerical dtype (or if the data was numeric to begin with),
+ downcast that resulting data to the smallest numerical dtype
+ possible according to the following rules:
+
+ - 'integer' or 'signed': smallest signed int dtype (min.: np.int8)
+ - 'unsigned': smallest unsigned int dtype (min.: np.uint8)
+ - 'float': smallest float dtype (min.: np.float32)
+
+ As this behaviour is separate from the core conversion to
+ numeric values, any errors raised during the downcasting
+ will be surfaced regardless of the value of the 'errors' input.
+
+ In addition, downcasting will only occur if the size
+ of the resulting data's dtype is strictly larger than
+ the dtype it is to be cast to, so if none of the dtypes
+ checked satisfy that specification, no downcasting will be
+ performed on the data.
+
+ .. versionadded:: 0.19.0
Returns
-------
@@ -74,10 +95,37 @@ def to_numeric(arg, errors='raise'):
>>> import pandas as pd
>>> s = pd.Series(['1.0', '2', -3])
>>> pd.to_numeric(s)
+ 0 1.0
+ 1 2.0
+ 2 -3.0
+ dtype: float64
+ >>> pd.to_numeric(s, downcast='float')
+ 0 1.0
+ 1 2.0
+ 2 -3.0
+ dtype: float32
+ >>> pd.to_numeric(s, downcast='signed')
+ 0 1
+ 1 2
+ 2 -3
+ dtype: int8
>>> s = pd.Series(['apple', '1.0', '2', -3])
>>> pd.to_numeric(s, errors='ignore')
+ 0 apple
+ 1 1.0
+ 2 2
+ 3 -3
+ dtype: object
>>> pd.to_numeric(s, errors='coerce')
+ 0 NaN
+ 1 1.0
+ 2 2.0
+ 3 -3.0
+ dtype: float64
"""
+ if downcast not in (None, 'integer', 'signed', 'unsigned', 'float'):
+ raise ValueError('invalid downcasting method provided')
+
is_series = False
is_index = False
is_scalar = False
@@ -102,20 +150,51 @@ def to_numeric(arg, errors='raise'):
else:
values = arg
- if com.is_numeric_dtype(values):
- pass
- elif com.is_datetime_or_timedelta_dtype(values):
- values = values.astype(np.int64)
- else:
- values = com._ensure_object(values)
- coerce_numeric = False if errors in ('ignore', 'raise') else True
+ try:
+ if com.is_numeric_dtype(values):
+ pass
+ elif com.is_datetime_or_timedelta_dtype(values):
+ values = values.astype(np.int64)
+ else:
+ values = com._ensure_object(values)
+ coerce_numeric = False if errors in ('ignore', 'raise') else True
- try:
values = lib.maybe_convert_numeric(values, set(),
coerce_numeric=coerce_numeric)
- except:
- if errors == 'raise':
- raise
+
+ except Exception:
+ if errors == 'raise':
+ raise
+
+ # attempt downcast only if the data has been successfully converted
+ # to a numerical dtype and if a downcast method has been specified
+ if downcast is not None and com.is_numeric_dtype(values):
+ typecodes = None
+
+ if downcast in ('integer', 'signed'):
+ typecodes = np.typecodes['Integer']
+ elif downcast == 'unsigned' and np.min(values) > 0:
+ typecodes = np.typecodes['UnsignedInteger']
+ elif downcast == 'float':
+ typecodes = np.typecodes['Float']
+
+ # pandas support goes only to np.float32,
+ # as float dtypes smaller than that are
+ # extremely rare and not well supported
+ float_32_char = np.dtype(np.float32).char
+ float_32_ind = typecodes.index(float_32_char)
+ typecodes = typecodes[float_32_ind:]
+
+ if typecodes is not None:
+ # from smallest to largest
+ for dtype in typecodes:
+ if np.dtype(dtype).itemsize < values.dtype.itemsize:
+ values = com._possibly_downcast_to_dtype(
+ values, dtype)
+
+ # successful conversion
+ if values.dtype == dtype:
+ break
if is_series:
return pd.Series(values, index=arg.index, name=arg.name)
| Title is self-explanatory. Closes #13352.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13425 | 2016-06-10T23:53:24Z | 2016-07-10T21:12:56Z | null | 2016-07-10T21:15:32Z |
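The downcast loop in the patch above walks numpy typecodes from smallest to largest and stops at the first dtype that round-trips the data. A stdlib-only sketch of that search for signed integers (hypothetical helper, not pandas API; the real code uses `np.typecodes` and `_possibly_downcast_to_dtype`):

```python
def downcast_ints(values):
    # Walk candidate widths smallest-to-largest, mirroring the
    # typecode search in the patch; return the first signed integer
    # type whose range contains every value.
    for bits in (8, 16, 32, 64):
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        if all(lo <= v <= hi for v in values):
            return 'int%d' % bits
    return 'object'  # nothing fits; leave values alone

print(downcast_ints([1, 2, -3]))   # int8
print(downcast_ints([1, 40000]))   # int32
```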
CLN: Remove the engine parameter in CSVFormatter and to_csv | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 657de7ec26efc..1d282e975d7d5 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -433,6 +433,15 @@ Deprecations
- ``as_recarray`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13373`)
- top-level ``pd.ordered_merge()`` has been renamed to ``pd.merge_ordered()`` and the original name will be removed in a future version (:issue:`13358`)
+
+.. _whatsnew_0190.prior_deprecations:
+
+Removal of prior version deprecations/changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- ``DataFrame.to_csv()`` has dropped the ``engine`` parameter (:issue:`11274`, :issue:`13419`)
+
+
.. _whatsnew_0190.performance:
Performance Improvements
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e804271d8afa9..356abc67b168a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1342,7 +1342,6 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
cols=columns, header=header, index=index,
index_label=index_label, mode=mode,
chunksize=chunksize, quotechar=quotechar,
- engine=kwds.get("engine"),
tupleize_cols=tupleize_cols,
date_format=date_format,
doublequote=doublequote,
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
index 0c6a15db4ccfe..cc46ed57aeff0 100644
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -30,7 +30,6 @@
import itertools
import csv
-import warnings
common_docstring = """
Parameters
@@ -1326,15 +1325,10 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
float_format=None, cols=None, header=True, index=True,
index_label=None, mode='w', nanRep=None, encoding=None,
compression=None, quoting=None, line_terminator='\n',
- chunksize=None, engine=None, tupleize_cols=False,
- quotechar='"', date_format=None, doublequote=True,
- escapechar=None, decimal='.'):
-
- if engine is not None:
- warnings.warn("'engine' keyword is deprecated and will be "
- "removed in a future version", FutureWarning,
- stacklevel=3)
- self.engine = engine # remove for 0.18
+ chunksize=None, tupleize_cols=False, quotechar='"',
+ date_format=None, doublequote=True, escapechar=None,
+ decimal='.'):
+
self.obj = obj
if path_or_buf is None:
@@ -1369,11 +1363,6 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
self.date_format = date_format
- # GH3457
- if not self.obj.columns.is_unique and engine == 'python':
- raise NotImplementedError("columns.is_unique == False not "
- "supported with engine='python'")
-
self.tupleize_cols = tupleize_cols
self.has_mi_columns = (isinstance(obj.columns, MultiIndex) and
not self.tupleize_cols)
@@ -1430,108 +1419,6 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
if not index:
self.nlevels = 0
- # original python implem. of df.to_csv
- # invoked by df.to_csv(engine=python)
- def _helper_csv(self, writer, na_rep=None, cols=None, header=True,
- index=True, index_label=None, float_format=None,
- date_format=None):
- if cols is None:
- cols = self.columns
-
- has_aliases = isinstance(header, (tuple, list, np.ndarray, Index))
- if has_aliases or header:
- if index:
- # should write something for index label
- if index_label is not False:
- if index_label is None:
- if isinstance(self.obj.index, MultiIndex):
- index_label = []
- for i, name in enumerate(self.obj.index.names):
- if name is None:
- name = ''
- index_label.append(name)
- else:
- index_label = self.obj.index.name
- if index_label is None:
- index_label = ['']
- else:
- index_label = [index_label]
- elif not isinstance(index_label,
- (list, tuple, np.ndarray, Index)):
- # given a string for a DF with Index
- index_label = [index_label]
-
- encoded_labels = list(index_label)
- else:
- encoded_labels = []
-
- if has_aliases:
- if len(header) != len(cols):
- raise ValueError(('Writing %d cols but got %d aliases'
- % (len(cols), len(header))))
- else:
- write_cols = header
- else:
- write_cols = cols
- encoded_cols = list(write_cols)
-
- writer.writerow(encoded_labels + encoded_cols)
- else:
- encoded_cols = list(cols)
- writer.writerow(encoded_cols)
-
- if date_format is None:
- date_formatter = lambda x: Timestamp(x)._repr_base
- else:
-
- def strftime_with_nulls(x):
- x = Timestamp(x)
- if notnull(x):
- return x.strftime(date_format)
-
- date_formatter = lambda x: strftime_with_nulls(x)
-
- data_index = self.obj.index
-
- if isinstance(self.obj.index, PeriodIndex):
- data_index = self.obj.index.to_timestamp()
-
- if isinstance(data_index, DatetimeIndex) and date_format is not None:
- data_index = Index([date_formatter(x) for x in data_index])
-
- values = self.obj.copy()
- values.index = data_index
- values.columns = values.columns.to_native_types(
- na_rep=na_rep, float_format=float_format, date_format=date_format,
- quoting=self.quoting)
- values = values[cols]
-
- series = {}
- for k, v in compat.iteritems(values._series):
- series[k] = v._values
-
- nlevels = getattr(data_index, 'nlevels', 1)
- for j, idx in enumerate(data_index):
- row_fields = []
- if index:
- if nlevels == 1:
- row_fields = [idx]
- else: # handle MultiIndex
- row_fields = list(idx)
- for i, col in enumerate(cols):
- val = series[col][j]
- if lib.checknull(val):
- val = na_rep
-
- if float_format is not None and com.is_float(val):
- val = float_format % val
- elif isinstance(val, (np.datetime64, Timestamp)):
- val = date_formatter(val)
-
- row_fields.append(val)
-
- writer.writerow(row_fields)
-
def save(self):
# create the writer & save
if hasattr(self.path_or_buf, 'write'):
@@ -1555,17 +1442,7 @@ def save(self):
else:
self.writer = csv.writer(f, **writer_kwargs)
- if self.engine == 'python':
- # to be removed in 0.13
- self._helper_csv(self.writer, na_rep=self.na_rep,
- float_format=self.float_format,
- cols=self.cols, header=self.header,
- index=self.index,
- index_label=self.index_label,
- date_format=self.date_format)
-
- else:
- self._save()
+ self._save()
finally:
if close:
diff --git a/pandas/tests/formats/test_format.py b/pandas/tests/formats/test_format.py
index c5e9c258b293a..7a282e7eb14ad 100644
--- a/pandas/tests/formats/test_format.py
+++ b/pandas/tests/formats/test_format.py
@@ -3329,12 +3329,6 @@ def test_to_csv_date_format(self):
self.assertEqual(df_sec_grouped.mean().to_csv(date_format='%Y-%m-%d'),
expected_ymd_sec)
- # deprecation GH11274
- def test_to_csv_engine_kw_deprecation(self):
- with tm.assert_produces_warning(FutureWarning):
- df = DataFrame({'col1': [1], 'col2': ['a'], 'col3': [10.1]})
- df.to_csv(engine='python')
-
def test_period(self):
# GH 12615
df = pd.DataFrame({'A': pd.period_range('2013-01',
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index c23702ef46ad2..55c7ebb183ce5 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -10,7 +10,7 @@
from pandas.compat import (lmap, range, lrange, StringIO, u)
from pandas.parser import CParserError
from pandas import (DataFrame, Index, Series, MultiIndex, Timestamp,
- date_range, read_csv, compat)
+ date_range, read_csv, compat, to_datetime)
import pandas as pd
from pandas.util.testing import (assert_almost_equal,
@@ -139,7 +139,7 @@ def test_to_csv_from_csv5(self):
self.tzframe.to_csv(path)
result = pd.read_csv(path, index_col=0, parse_dates=['A'])
- converter = lambda c: pd.to_datetime(result[c]).dt.tz_localize(
+ converter = lambda c: to_datetime(result[c]).dt.tz_localize(
'UTC').dt.tz_convert(self.tzframe[c].dt.tz)
result['B'] = converter('B')
result['C'] = converter('C')
@@ -162,15 +162,6 @@ def test_to_csv_cols_reordering(self):
assert_frame_equal(df[cols], rs_c, check_names=False)
- def test_to_csv_legacy_raises_on_dupe_cols(self):
- df = mkdf(10, 3)
- df.columns = ['a', 'a', 'b']
- with ensure_clean() as path:
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- self.assertRaises(NotImplementedError,
- df.to_csv, path, engine='python')
-
def test_to_csv_new_dupe_cols(self):
import pandas as pd
@@ -712,7 +703,6 @@ def test_to_csv_dups_cols(self):
cols.extend([0, 1, 2])
df.columns = cols
- from pandas import to_datetime
with ensure_clean() as filename:
df.to_csv(filename)
result = read_csv(filename, index_col=0)
@@ -993,72 +983,57 @@ def test_to_csv_compression_value_error(self):
filename, compression="zip")
def test_to_csv_date_format(self):
- from pandas import to_datetime
with ensure_clean('__tmp_to_csv_date_format__') as path:
- for engine in [None, 'python']:
- w = FutureWarning if engine == 'python' else None
-
- dt_index = self.tsframe.index
- datetime_frame = DataFrame(
- {'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index)
-
- with tm.assert_produces_warning(w, check_stacklevel=False):
- datetime_frame.to_csv(
- path, date_format='%Y%m%d', engine=engine)
-
- # Check that the data was put in the specified format
- test = read_csv(path, index_col=0)
-
- datetime_frame_int = datetime_frame.applymap(
- lambda x: int(x.strftime('%Y%m%d')))
- datetime_frame_int.index = datetime_frame_int.index.map(
- lambda x: int(x.strftime('%Y%m%d')))
+ dt_index = self.tsframe.index
+ datetime_frame = DataFrame(
+ {'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index)
+ datetime_frame.to_csv(path, date_format='%Y%m%d')
- assert_frame_equal(test, datetime_frame_int)
+ # Check that the data was put in the specified format
+ test = read_csv(path, index_col=0)
- with tm.assert_produces_warning(w, check_stacklevel=False):
- datetime_frame.to_csv(
- path, date_format='%Y-%m-%d', engine=engine)
+ datetime_frame_int = datetime_frame.applymap(
+ lambda x: int(x.strftime('%Y%m%d')))
+ datetime_frame_int.index = datetime_frame_int.index.map(
+ lambda x: int(x.strftime('%Y%m%d')))
- # Check that the data was put in the specified format
- test = read_csv(path, index_col=0)
- datetime_frame_str = datetime_frame.applymap(
- lambda x: x.strftime('%Y-%m-%d'))
- datetime_frame_str.index = datetime_frame_str.index.map(
- lambda x: x.strftime('%Y-%m-%d'))
+ assert_frame_equal(test, datetime_frame_int)
- assert_frame_equal(test, datetime_frame_str)
+ datetime_frame.to_csv(path, date_format='%Y-%m-%d')
- # Check that columns get converted
- datetime_frame_columns = datetime_frame.T
+ # Check that the data was put in the specified format
+ test = read_csv(path, index_col=0)
+ datetime_frame_str = datetime_frame.applymap(
+ lambda x: x.strftime('%Y-%m-%d'))
+ datetime_frame_str.index = datetime_frame_str.index.map(
+ lambda x: x.strftime('%Y-%m-%d'))
- with tm.assert_produces_warning(w, check_stacklevel=False):
- datetime_frame_columns.to_csv(
- path, date_format='%Y%m%d', engine=engine)
+ assert_frame_equal(test, datetime_frame_str)
- test = read_csv(path, index_col=0)
+ # Check that columns get converted
+ datetime_frame_columns = datetime_frame.T
+ datetime_frame_columns.to_csv(path, date_format='%Y%m%d')
- datetime_frame_columns = datetime_frame_columns.applymap(
- lambda x: int(x.strftime('%Y%m%d')))
- # Columns don't get converted to ints by read_csv
- datetime_frame_columns.columns = (
- datetime_frame_columns.columns
- .map(lambda x: x.strftime('%Y%m%d')))
+ test = read_csv(path, index_col=0)
- assert_frame_equal(test, datetime_frame_columns)
+ datetime_frame_columns = datetime_frame_columns.applymap(
+ lambda x: int(x.strftime('%Y%m%d')))
+ # Columns don't get converted to ints by read_csv
+ datetime_frame_columns.columns = (
+ datetime_frame_columns.columns
+ .map(lambda x: x.strftime('%Y%m%d')))
- # test NaTs
- nat_index = to_datetime(
- ['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000'])
- nat_frame = DataFrame({'A': nat_index}, index=nat_index)
+ assert_frame_equal(test, datetime_frame_columns)
- with tm.assert_produces_warning(w, check_stacklevel=False):
- nat_frame.to_csv(
- path, date_format='%Y-%m-%d', engine=engine)
+ # test NaTs
+ nat_index = to_datetime(
+ ['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000'])
+ nat_frame = DataFrame({'A': nat_index}, index=nat_index)
+ nat_frame.to_csv(path, date_format='%Y-%m-%d')
- test = read_csv(path, parse_dates=[0, 1], index_col=0)
+ test = read_csv(path, parse_dates=[0, 1], index_col=0)
- assert_frame_equal(test, nat_frame)
+ assert_frame_equal(test, nat_frame)
def test_to_csv_with_dst_transitions(self):
@@ -1077,7 +1052,7 @@ def test_to_csv_with_dst_transitions(self):
# we have to reconvert the index as we
# don't parse the tz's
result = read_csv(path, index_col=0)
- result.index = pd.to_datetime(result.index).tz_localize(
+ result.index = to_datetime(result.index).tz_localize(
'UTC').tz_convert('Europe/London')
assert_frame_equal(result, df)
@@ -1089,9 +1064,9 @@ def test_to_csv_with_dst_transitions(self):
with ensure_clean('csv_date_format_with_dst') as path:
df.to_csv(path, index=True)
result = read_csv(path, index_col=0)
- result.index = pd.to_datetime(result.index).tz_localize(
+ result.index = to_datetime(result.index).tz_localize(
'UTC').tz_convert('Europe/Paris')
- result['idx'] = pd.to_datetime(result['idx']).astype(
+ result['idx'] = to_datetime(result['idx']).astype(
'datetime64[ns, Europe/Paris]')
assert_frame_equal(result, df)
| Title is self-explanatory.
Internal code and #6581 indicate that this was long overdue.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13419 | 2016-06-10T06:05:26Z | 2016-07-10T21:50:36Z | null | 2016-07-10T21:52:56Z |
BUG: Fix csv.QUOTE_NONNUMERIC quoting in to_csv | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index f5dbfd80de7cc..b3ce9911d3f4d 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -388,6 +388,8 @@ Bug Fixes
- Bug in various index types, which did not propagate the name of passed index (:issue:`12309`)
- Bug in ``DatetimeIndex``, which did not honour the ``copy=True`` (:issue:`13205`)
+
+- Bug in ``DataFrame.to_csv()`` in which float values were being quoted even though quotations were specified for non-numeric values only (:issue:`12922`, :issue:`13259`)
- Bug in ``MultiIndex`` slicing where extra elements were returned when level is non-unique (:issue:`12896`)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 97df81ad6be48..c931adc9a31df 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1529,6 +1529,20 @@ def to_native_types(self, slicer=None, na_rep='', float_format=None,
if slicer is not None:
values = values[:, slicer]
+ # see gh-13418: no special formatting is desired at the
+ # output (important for appropriate 'quoting' behaviour),
+ # so do not pass it through the FloatArrayFormatter
+ if float_format is None and decimal == '.':
+ mask = isnull(values)
+
+ if not quoting:
+ values = values.astype(str)
+ else:
+ values = np.array(values, dtype='object')
+
+ values[mask] = na_rep
+ return values
+
from pandas.formats.format import FloatArrayFormatter
formatter = FloatArrayFormatter(values, na_rep=na_rep,
float_format=float_format,
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
index 923ac25f0ebed..a8e184ce94c89 100644
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -1,4 +1,9 @@
# -*- coding: utf-8 -*-
+"""
+Internal module for formatting output data in csv, html,
+and latex files. This module also applies to display formatting.
+"""
+
from __future__ import print_function
from distutils.version import LooseVersion
# pylint: disable=W0141
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index bacf604c491b1..c23702ef46ad2 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -824,35 +824,6 @@ def test_to_csv_float_format(self):
index=['A', 'B'], columns=['X', 'Y', 'Z'])
assert_frame_equal(rs, xp)
- def test_to_csv_quoting(self):
- df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']})
-
- buf = StringIO()
- df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC)
-
- result = buf.getvalue()
- expected = ('"A","B"\n'
- '1,"foo"\n'
- '2,"bar"\n'
- '3,"baz"\n')
-
- self.assertEqual(result, expected)
-
- # quoting windows line terminators, presents with encoding?
- # #3503
- text = 'a,b,c\n1,"test \r\n",3\n'
- df = pd.read_csv(StringIO(text))
- buf = StringIO()
- df.to_csv(buf, encoding='utf-8', index=False)
- self.assertEqual(buf.getvalue(), text)
-
- # testing if quoting parameter is passed through with multi-indexes
- # related to issue #7791
- df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
- df = df.set_index(['a', 'b'])
- expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n'
- self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected)
-
def test_to_csv_unicodewriter_quoting(self):
df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']})
@@ -1131,3 +1102,83 @@ def test_to_csv_with_dst_transitions(self):
df.to_pickle(path)
result = pd.read_pickle(path)
assert_frame_equal(result, df)
+
+ def test_to_csv_quoting(self):
+ df = DataFrame({
+ 'c_string': ['a', 'b,c'],
+ 'c_int': [42, np.nan],
+ 'c_float': [1.0, 3.2],
+ 'c_bool': [True, False],
+ })
+
+ expected = """\
+,c_bool,c_float,c_int,c_string
+0,True,1.0,42.0,a
+1,False,3.2,,"b,c"
+"""
+ result = df.to_csv()
+ self.assertEqual(result, expected)
+
+ result = df.to_csv(quoting=None)
+ self.assertEqual(result, expected)
+
+ result = df.to_csv(quoting=csv.QUOTE_MINIMAL)
+ self.assertEqual(result, expected)
+
+ expected = """\
+"","c_bool","c_float","c_int","c_string"
+"0","True","1.0","42.0","a"
+"1","False","3.2","","b,c"
+"""
+ result = df.to_csv(quoting=csv.QUOTE_ALL)
+ self.assertEqual(result, expected)
+
+ # see gh-12922, gh-13259: make sure changes to
+ # the formatters do not break this behaviour
+ expected = """\
+"","c_bool","c_float","c_int","c_string"
+0,True,1.0,42.0,"a"
+1,False,3.2,"","b,c"
+"""
+ result = df.to_csv(quoting=csv.QUOTE_NONNUMERIC)
+ self.assertEqual(result, expected)
+
+ msg = "need to escape, but no escapechar set"
+ tm.assertRaisesRegexp(csv.Error, msg, df.to_csv,
+ quoting=csv.QUOTE_NONE)
+ tm.assertRaisesRegexp(csv.Error, msg, df.to_csv,
+ quoting=csv.QUOTE_NONE,
+ escapechar=None)
+
+ expected = """\
+,c_bool,c_float,c_int,c_string
+0,True,1.0,42.0,a
+1,False,3.2,,b!,c
+"""
+ result = df.to_csv(quoting=csv.QUOTE_NONE,
+ escapechar='!')
+ self.assertEqual(result, expected)
+
+ expected = """\
+,c_bool,c_ffloat,c_int,c_string
+0,True,1.0,42.0,a
+1,False,3.2,,bf,c
+"""
+ result = df.to_csv(quoting=csv.QUOTE_NONE,
+ escapechar='f')
+ self.assertEqual(result, expected)
+
+ # see gh-3503: quoting Windows line terminators
+ # presents with encoding?
+ text = 'a,b,c\n1,"test \r\n",3\n'
+ df = pd.read_csv(StringIO(text))
+ buf = StringIO()
+ df.to_csv(buf, encoding='utf-8', index=False)
+ self.assertEqual(buf.getvalue(), text)
+
+ # xref gh-7791: make sure the quoting parameter is passed through
+ # with multi-indexes
+ df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
+ df = df.set_index(['a', 'b'])
+ expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n'
+ self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected)
| Closes #12922: "bug" traced to #12194 via bisection, where the float formatting was unconditionally casting everything to string.
I say "bug" (with quotations) because the changes to `get_result_as_array` did **correctly** cast everything to string as per the documentation (i.e. it had inadvertently patched a bug itself even though it was just a cleaning PR). However, the changes had overlooked the impact it would have on `to_csv`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13418 | 2016-06-10T05:22:02Z | 2016-06-16T12:32:24Z | null | 2016-06-16T12:32:47Z |
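The behaviour this fix restores tracks the stdlib `csv` writer semantics for `QUOTE_NONNUMERIC`: only non-numeric fields get quoted, so floats must be emitted as numbers rather than pre-formatted strings. A minimal stdlib demonstration (no pandas involved):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)
# strings are quoted, ints and floats are written bare
writer.writerow(['a', 1.0, 42])
print(buf.getvalue())  # "a",1.0,42
```

Passing the float through a string formatter first (as the pre-fix code path did) would make it "non-numeric" from the writer's point of view and get it quoted.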
DOC- typo fix and adding correct command for environment deactivation… | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index a9b86925666b7..8235eacad0b0a 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -198,10 +198,14 @@ To view your environments::
conda info -e
-To return to you home root environment::
+To return to your home root environment in windows::
deactivate
+To return to your home root environment in linux::
+
+ source deactivate
+
See the full conda docs `here <http://conda.pydata.org/docs>`__.
At this point you can easily do an *in-place* install, as detailed in the next section.
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
… for Windows and Linux
| https://api.github.com/repos/pandas-dev/pandas/pulls/13413 | 2016-06-09T18:15:56Z | 2016-06-09T19:16:27Z | null | 2016-06-09T19:43:58Z |
BUG: Fix inconsistent C engine quoting behaviour | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 61625104f5c1d..b011072d8c3fb 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -287,11 +287,10 @@ lineterminator : str (length 1), default ``None``
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
-quoting : int or ``csv.QUOTE_*`` instance, default ``None``
+quoting : int or ``csv.QUOTE_*`` instance, default ``0``
Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
``QUOTE_MINIMAL`` (0), ``QUOTE_ALL`` (1), ``QUOTE_NONNUMERIC`` (2) or
- ``QUOTE_NONE`` (3). Default (``None``) results in ``QUOTE_MINIMAL``
- behavior.
+ ``QUOTE_NONE`` (3).
doublequote : boolean, default ``True``
When ``quotechar`` is specified and ``quoting`` is not ``QUOTE_NONE``,
indicate whether or not to interpret two consecutive ``quotechar`` elements
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 6bc152aad6b01..ba79dce43acc6 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -492,6 +492,8 @@ Bug Fixes
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` kwarg from accepting single-byte unicode strings (:issue:`13219`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` from being an empty set (:issue:`13402`)
+- Bug in ``pd.read_csv()`` with ``engine=='c'`` in which null ``quotechar`` was not accepted even though ``quoting`` was specified as ``None`` (:issue:`13411`)
+- Bug in ``pd.read_csv()`` with ``engine=='c'`` in which fields were not properly cast to float when quoting was specified as non-numeric (:issue:`13411`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 475eb73812666..9baff67845dac 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -202,10 +202,9 @@
quotechar : str (length 1), optional
The character used to denote the start and end of a quoted item. Quoted
items can include the delimiter and it will be ignored.
-quoting : int or csv.QUOTE_* instance, default None
+quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
- Default (None) results in QUOTE_MINIMAL behavior.
doublequote : boolean, default ``True``
When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
whether or not to interpret two consecutive quotechar elements INSIDE a
diff --git a/pandas/io/tests/parser/quoting.py b/pandas/io/tests/parser/quoting.py
new file mode 100644
index 0000000000000..d0f1493be0621
--- /dev/null
+++ b/pandas/io/tests/parser/quoting.py
@@ -0,0 +1,140 @@
+# -*- coding: utf-8 -*-
+
+"""
+Tests that quoting specifications are properly handled
+during parsing for all of the parsers defined in parsers.py
+"""
+
+import csv
+import pandas.util.testing as tm
+
+from pandas import DataFrame
+from pandas.compat import StringIO
+
+
+class QuotingTests(object):
+
+ def test_bad_quote_char(self):
+ data = '1,2,3'
+
+ # Python 2.x: "...must be an 1-character..."
+ # Python 3.x: "...must be a 1-character..."
+ msg = '"quotechar" must be a(n)? 1-character string'
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quotechar='foo')
+
+ msg = 'quotechar must be set if quoting enabled'
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quotechar=None,
+ quoting=csv.QUOTE_MINIMAL)
+
+ msg = '"quotechar" must be string, not int'
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quotechar=2)
+
+ def test_bad_quoting(self):
+ data = '1,2,3'
+
+ msg = '"quoting" must be an integer'
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quoting='foo')
+
+ # quoting must in the range [0, 3]
+ msg = 'bad "quoting" value'
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quoting=5)
+
+ def test_quote_char_basic(self):
+ data = 'a,b,c\n1,2,"cat"'
+ expected = DataFrame([[1, 2, 'cat']],
+ columns=['a', 'b', 'c'])
+ result = self.read_csv(StringIO(data), quotechar='"')
+ tm.assert_frame_equal(result, expected)
+
+ def test_quote_char_various(self):
+ data = 'a,b,c\n1,2,"cat"'
+ expected = DataFrame([[1, 2, 'cat']],
+ columns=['a', 'b', 'c'])
+ quote_chars = ['~', '*', '%', '$', '@', 'P']
+
+ for quote_char in quote_chars:
+ new_data = data.replace('"', quote_char)
+ result = self.read_csv(StringIO(new_data), quotechar=quote_char)
+ tm.assert_frame_equal(result, expected)
+
+ def test_null_quote_char(self):
+ data = 'a,b,c\n1,2,3'
+
+ # sanity checks
+ msg = 'quotechar must be set if quoting enabled'
+
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quotechar=None,
+ quoting=csv.QUOTE_MINIMAL)
+
+ tm.assertRaisesRegexp(TypeError, msg, self.read_csv,
+ StringIO(data), quotechar='',
+ quoting=csv.QUOTE_MINIMAL)
+
+ # no errors should be raised if quoting is None
+ expected = DataFrame([[1, 2, 3]],
+ columns=['a', 'b', 'c'])
+
+ result = self.read_csv(StringIO(data), quotechar=None,
+ quoting=csv.QUOTE_NONE)
+ tm.assert_frame_equal(result, expected)
+
+ result = self.read_csv(StringIO(data), quotechar='',
+ quoting=csv.QUOTE_NONE)
+ tm.assert_frame_equal(result, expected)
+
+ def test_quoting_various(self):
+ data = '1,2,"foo"'
+ cols = ['a', 'b', 'c']
+
+ # QUOTE_MINIMAL and QUOTE_ALL apply only to
+ # the CSV writer, so they should have no
+ # special effect for the CSV reader
+ expected = DataFrame([[1, 2, 'foo']], columns=cols)
+
+ # test default (afterwards, arguments are all explicit)
+ result = self.read_csv(StringIO(data), names=cols)
+ tm.assert_frame_equal(result, expected)
+
+ result = self.read_csv(StringIO(data), quotechar='"',
+ quoting=csv.QUOTE_MINIMAL, names=cols)
+ tm.assert_frame_equal(result, expected)
+
+ result = self.read_csv(StringIO(data), quotechar='"',
+ quoting=csv.QUOTE_ALL, names=cols)
+ tm.assert_frame_equal(result, expected)
+
+ # QUOTE_NONE tells the reader to do no special handling
+ # of quote characters and leave them alone
+ expected = DataFrame([[1, 2, '"foo"']], columns=cols)
+ result = self.read_csv(StringIO(data), quotechar='"',
+ quoting=csv.QUOTE_NONE, names=cols)
+ tm.assert_frame_equal(result, expected)
+
+ # QUOTE_NONNUMERIC tells the reader to cast
+ # all non-quoted fields to float
+ expected = DataFrame([[1.0, 2.0, 'foo']], columns=cols)
+ result = self.read_csv(StringIO(data), quotechar='"',
+ quoting=csv.QUOTE_NONNUMERIC,
+ names=cols)
+ tm.assert_frame_equal(result, expected)
+
+ def test_double_quote(self):
+ data = 'a,b\n3,"4 "" 5"'
+
+ expected = DataFrame([[3, '4 " 5']],
+ columns=['a', 'b'])
+ result = self.read_csv(StringIO(data), quotechar='"',
+ doublequote=True)
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame([[3, '4 " 5"']],
+ columns=['a', 'b'])
+ result = self.read_csv(StringIO(data), quotechar='"',
+ doublequote=False)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/io/tests/parser/test_parsers.py b/pandas/io/tests/parser/test_parsers.py
index fda7b28769647..21f903342a611 100644
--- a/pandas/io/tests/parser/test_parsers.py
+++ b/pandas/io/tests/parser/test_parsers.py
@@ -11,6 +11,7 @@
from .common import ParserTests
from .header import HeaderTests
from .comment import CommentTests
+from .quoting import QuotingTests
from .usecols import UsecolsTests
from .skiprows import SkipRowsTests
from .index_col import IndexColTests
@@ -28,7 +29,7 @@ class BaseParser(CommentTests, CompressionTests,
IndexColTests, MultithreadTests,
NAvaluesTests, ParseDatesTests,
ParserTests, SkipRowsTests,
- UsecolsTests):
+ UsecolsTests, QuotingTests):
def read_csv(self, *args, **kwargs):
raise NotImplementedError
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 063b2158d999a..3928bc8472113 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -7,6 +7,7 @@ from libc.string cimport strncpy, strlen, strcmp, strcasecmp
cimport libc.stdio as stdio
import warnings
+from csv import QUOTE_MINIMAL, QUOTE_NONNUMERIC, QUOTE_NONE
from cpython cimport (PyObject, PyBytes_FromString,
PyBytes_AsString, PyBytes_Check,
PyUnicode_Check, PyUnicode_AsUTF8String)
@@ -283,6 +284,7 @@ cdef class TextReader:
object compression
object mangle_dupe_cols
object tupleize_cols
+ list dtype_cast_order
set noconvert, usecols
def __cinit__(self, source,
@@ -393,8 +395,13 @@ cdef class TextReader:
raise ValueError('Only length-1 escapes supported')
self.parser.escapechar = ord(escapechar)
- self.parser.quotechar = ord(quotechar)
- self.parser.quoting = quoting
+ self._set_quoting(quotechar, quoting)
+
+ # TODO: endianness just a placeholder?
+ if quoting == QUOTE_NONNUMERIC:
+ self.dtype_cast_order = ['<f8', '<i8', '|b1', '|O8']
+ else:
+ self.dtype_cast_order = ['<i8', '<f8', '|b1', '|O8']
if comment is not None:
if len(comment) > 1:
@@ -548,6 +555,29 @@ cdef class TextReader:
def set_error_bad_lines(self, int status):
self.parser.error_bad_lines = status
+ def _set_quoting(self, quote_char, quoting):
+ if not isinstance(quoting, int):
+ raise TypeError('"quoting" must be an integer')
+
+ if not QUOTE_MINIMAL <= quoting <= QUOTE_NONE:
+ raise TypeError('bad "quoting" value')
+
+ if not isinstance(quote_char, (str, bytes)) and quote_char is not None:
+ dtype = type(quote_char).__name__
+ raise TypeError('"quotechar" must be string, '
+ 'not {dtype}'.format(dtype=dtype))
+
+ if quote_char is None or quote_char == '':
+ if quoting != QUOTE_NONE:
+ raise TypeError("quotechar must be set if quoting enabled")
+ self.parser.quoting = quoting
+ self.parser.quotechar = -1
+ elif len(quote_char) > 1: # 0-len case handled earlier
+ raise TypeError('"quotechar" must be a 1-character string')
+ else:
+ self.parser.quoting = quoting
+ self.parser.quotechar = ord(quote_char)
+
cdef _make_skiprow_set(self):
if isinstance(self.skiprows, (int, np.integer)):
parser_set_skipfirstnrows(self.parser, self.skiprows)
@@ -1066,7 +1096,7 @@ cdef class TextReader:
return self._string_convert(i, start, end, na_filter, na_hashset)
else:
col_res = None
- for dt in dtype_cast_order:
+ for dt in self.dtype_cast_order:
try:
col_res, na_count = self._convert_with_dtype(
dt, i, start, end, na_filter, 0, na_hashset, na_flist)
@@ -1847,12 +1877,6 @@ cdef kh_float64_t* kset_float64_from_list(values) except NULL:
return table
-# if at first you don't succeed...
-
-# TODO: endianness just a placeholder?
-cdef list dtype_cast_order = ['<i8', '<f8', '|b1', '|O8']
-
-
cdef raise_parser_error(object base, parser_t *parser):
message = '%s. C error: ' % base
if parser.error_msg != NULL:
| 1) Add significant testing to quoting in `read_csv`
2) Fix bug in C engine in which a NULL `quotechar` would raise even though `quoting=csv.QUOTE_NONE`.
3) Fix bug in C engine in which `quoting=csv.QUOTE_NONNUMERIC` wouldn't cause non-quoted fields to be cast to `float`. Relevant definitions can be found in the Python docs <a href="https://docs.python.org/3.5/library/csv.html">here</a>.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13411 | 2016-06-09T13:18:35Z | 2016-06-17T16:39:07Z | null | 2016-06-17T17:10:32Z |
ENH: parse categoricals in read_csv | diff --git a/asv_bench/benchmarks/parser_vb.py b/asv_bench/benchmarks/parser_vb.py
index 04f25034638cd..6dc8bffd6dac9 100644
--- a/asv_bench/benchmarks/parser_vb.py
+++ b/asv_bench/benchmarks/parser_vb.py
@@ -114,6 +114,27 @@ def teardown(self):
os.remove('test.csv')
+class read_csv_categorical(object):
+ goal_time = 0.2
+
+ def setup(self):
+ N = 100000
+ group1 = ['aaaaaaaa', 'bbbbbbb', 'cccccccc', 'dddddddd', 'eeeeeeee']
+ df = DataFrame({'a': np.random.choice(group1, N).astype('object'),
+ 'b': np.random.choice(group1, N).astype('object'),
+ 'c': np.random.choice(group1, N).astype('object')})
+ df.to_csv('strings.csv', index=False)
+
+ def time_read_csv_categorical_post(self):
+ read_csv('strings.csv').apply(pd.Categorical)
+
+ def time_read_csv_categorical_direct(self):
+ read_csv('strings.csv', dtype='category')
+
+ def teardown(self):
+ os.remove('strings.csv')
+
+
class read_table_multiple_date(object):
goal_time = 0.2
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 2866371cce61a..7917e6b4cdfce 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -500,6 +500,43 @@ worth trying.
data that was read in. It is important to note that the overall column will be
marked with a ``dtype`` of ``object``, which is used for columns with mixed dtypes.
+.. _io.categorical:
+
+Specifying Categorical dtype
+''''''''''''''''''''''''''''
+
+.. versionadded:: 0.19.0
+
+``Categorical`` columns can be parsed directly by specifying ``dtype='category'``
+
+.. ipython:: python
+
+ data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
+
+ pd.read_csv(StringIO(data))
+ pd.read_csv(StringIO(data)).dtypes
+ pd.read_csv(StringIO(data), dtype='category').dtypes
+
+Individual columns can be parsed as a ``Categorical`` using a dict specification
+
+.. ipython:: python
+
+ pd.read_csv(StringIO(data), dtype={'col1': 'category'}).dtypes
+
+.. note::
+
+ The resulting categories will always be parsed as strings (object dtype).
+ If the categories are numeric they can be converted using the
+ :func:`to_numeric` function, or as appropriate, another converter
+ such as :func:`to_datetime`.
+
+ .. ipython:: python
+
+ df = pd.read_csv(StringIO(data), dtype='category')
+ df.dtypes
+ df['col3']
+ df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)
+ df['col3']
Naming and Using Columns
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 59a106291dad8..6c995a6989a38 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -12,6 +12,7 @@ Highlights include:
- :func:`merge_asof` for asof-style time-series joining, see :ref:`here <whatsnew_0190.enhancements.asof_merge>`
- ``.rolling()`` are now time-series aware, see :ref:`here <whatsnew_0190.enhancements.rolling_ts>`
- pandas development api, see :ref:`here <whatsnew_0190.dev_api>`
+- :func:`read_csv` now supports parsing ``Categorical`` data, see :ref:`here <whatsnew_0190.enhancements.read_csv_categorical>`
.. contents:: What's new in v0.19.0
:local:
@@ -195,6 +196,14 @@ default of the index) in a DataFrame.
:func:`read_csv` has improved support for duplicate column names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. ipython:: python
+ :suppress:
+
+ from pandas.compat import StringIO
+
+.. _whatsnew_0190.enhancements.read_csv_dupe_col_names_support:
+
+
:ref:`Duplicate column names <io.dupe_names>` are now supported in :func:`read_csv` whether
they are in the file or passed in as the ``names`` parameter (:issue:`7160`, :issue:`9424`)
@@ -222,6 +231,46 @@ New behaviour:
In [2]: pd.read_csv(StringIO(data), names=names)
+
+.. _whatsnew_0190.enhancements.read_csv_categorical:
+
+:func:`read_csv` supports parsing ``Categorical`` directly
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :func:`read_csv` function now supports parsing a ``Categorical`` column when
+specified as a dtype (:issue:`10153`). Depending on the structure of the data,
+this can result in a faster parse time and lower memory usage compared to
+converting to ``Categorical`` after parsing. See the io :ref:`docs here <io.categorical>`
+
+.. ipython:: python
+
+ data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
+
+ pd.read_csv(StringIO(data))
+ pd.read_csv(StringIO(data)).dtypes
+ pd.read_csv(StringIO(data), dtype='category').dtypes
+
+Individual columns can be parsed as a ``Categorical`` using a dict specification
+
+.. ipython:: python
+
+ pd.read_csv(StringIO(data), dtype={'col1': 'category'}).dtypes
+
+.. note::
+
+ The resulting categories will always be parsed as strings (object dtype).
+ If the categories are numeric they can be converted using the
+ :func:`to_numeric` function, or as appropriate, another converter
+ such as :func:`to_datetime`.
+
+ .. ipython:: python
+
+ df = pd.read_csv(StringIO(data), dtype='category')
+ df.dtypes
+ df['col3']
+ df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)
+ df['col3']
+
.. _whatsnew_0190.enhancements.semi_month_offsets:
Semi-Month Offsets
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 103c9fa2b7ce8..4cea9e1d6b595 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -12,9 +12,10 @@
import pandas as pd
import pandas.util.testing as tm
-from pandas import DataFrame, Series, Index, MultiIndex
+from pandas import DataFrame, Series, Index, MultiIndex, Categorical
from pandas import compat
from pandas.compat import StringIO, range, lrange
+from pandas.types.dtypes import CategoricalDtype
class CParserTests(object):
@@ -135,6 +136,11 @@ def test_passing_dtype(self):
dtype={'A': 'timedelta64', 'B': 'float64'},
index_col=0)
+ # valid but unsupported - fixed width unicode string
+ self.assertRaises(TypeError, self.read_csv, path,
+ dtype={'A': 'U8'},
+ index_col=0)
+
# see gh-12048: empty frame
actual = self.read_csv(StringIO('A,B'), dtype=str)
expected = DataFrame({'A': [], 'B': []}, index=[], dtype=str)
@@ -184,6 +190,92 @@ def test_pass_dtype(self):
self.assertEqual(result['one'].dtype, 'u1')
self.assertEqual(result['two'].dtype, 'object')
+ def test_categorical_dtype(self):
+ # GH 10153
+ data = """a,b,c
+1,a,3.4
+1,a,3.4
+2,b,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['a', 'a', 'b']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype=CategoricalDtype())
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={'a': 'category',
+ 'b': 'category',
+ 'c': CategoricalDtype()})
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={'b': 'category'})
+ expected = pd.DataFrame({'a': [1, 1, 2],
+ 'b': Categorical(['a', 'a', 'b']),
+ 'c': [3.4, 3.4, 4.5]})
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={1: 'category'})
+ tm.assert_frame_equal(actual, expected)
+
+ # unsorted
+ data = """a,b,c
+1,b,3.4
+1,b,3.4
+2,a,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['b', 'b', 'a']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ # missing
+ data = """a,b,c
+1,b,3.4
+1,nan,3.4
+2,a,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['b', np.nan, 'a']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ def test_categorical_dtype_encoding(self):
+ # GH 10153
+ pth = tm.get_data_path('unicode_series.csv')
+ encoding = 'latin-1'
+ expected = self.read_csv(pth, header=None, encoding=encoding)
+ expected[1] = Categorical(expected[1])
+ actual = self.read_csv(pth, header=None, encoding=encoding,
+ dtype={1: 'category'})
+ tm.assert_frame_equal(actual, expected)
+
+ pth = tm.get_data_path('utf16_ex.txt')
+ encoding = 'utf-16'
+ expected = self.read_table(pth, encoding=encoding)
+ expected = expected.apply(Categorical)
+ actual = self.read_table(pth, encoding=encoding, dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ def test_categorical_dtype_chunksize(self):
+ # GH 10153
+ data = """a,b
+1,a
+1,b
+1,b
+2,c"""
+ expecteds = [pd.DataFrame({'a': [1, 1],
+ 'b': Categorical(['a', 'b'])}),
+ pd.DataFrame({'a': [1, 2],
+ 'b': Categorical(['b', 'c'])},
+ index=[2, 3])]
+ actuals = self.read_csv(StringIO(data), dtype={'b': 'category'},
+ chunksize=2)
+
+ for actual, expected in zip(actuals, expecteds):
+ tm.assert_frame_equal(actual, expected)
+
def test_pass_dtype_as_recarray(self):
if compat.is_platform_windows() and self.low_memory:
raise nose.SkipTest(
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index e72e2f90a5213..5af82be5b741b 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -25,6 +25,7 @@ cdef extern from "Python.h":
cdef extern from "stdlib.h":
void memcpy(void *dst, void *src, size_t n)
+cimport cython
cimport numpy as cnp
from numpy cimport ndarray, uint8_t, uint64_t
@@ -33,6 +34,15 @@ import numpy as np
cimport util
import pandas.lib as lib
+from pandas.types.common import (is_categorical_dtype, CategoricalDtype,
+ is_integer_dtype, is_float_dtype,
+ is_bool_dtype, is_object_dtype,
+ is_string_dtype, is_datetime64_dtype,
+ pandas_dtype)
+from pandas.core.categorical import Categorical
+from pandas.core.algorithms import take_1d
+from pandas.types.concat import union_categoricals
+from pandas import Index
import time
import os
@@ -399,11 +409,12 @@ cdef class TextReader:
self._set_quoting(quotechar, quoting)
- # TODO: endianness just a placeholder?
+
+ dtype_order = ['int64', 'float64', 'bool', 'object']
if quoting == QUOTE_NONNUMERIC:
- self.dtype_cast_order = ['<f8', '<i8', '|b1', '|O8']
- else:
- self.dtype_cast_order = ['<i8', '<f8', '|b1', '|O8']
+ # consistent with csv module semantics, cast all to float
+ dtype_order = dtype_order[1:]
+ self.dtype_cast_order = [np.dtype(x) for x in dtype_order]
if comment is not None:
if len(comment) > 1:
@@ -472,15 +483,10 @@ cdef class TextReader:
self.encoding = encoding
if isinstance(dtype, dict):
- conv = {}
- for k in dtype:
- v = dtype[k]
- if isinstance(v, basestring):
- v = np.dtype(v)
- conv[k] = v
- dtype = conv
+ dtype = {k: pandas_dtype(dtype[k])
+ for k in dtype}
elif dtype is not None:
- dtype = np.dtype(dtype)
+ dtype = pandas_dtype(dtype)
self.dtype = dtype
@@ -689,6 +695,7 @@ cdef class TextReader:
int status
Py_ssize_t size
char *errors = "strict"
+ cdef StringPath path = _string_path(self.c_encoding)
header = []
@@ -718,20 +725,18 @@ cdef class TextReader:
field_count = self.parser.line_fields[hr]
start = self.parser.line_start[hr]
- # TODO: Py3 vs. Py2
counts = {}
unnamed_count = 0
for i in range(field_count):
word = self.parser.words[start + i]
- if self.c_encoding == NULL and not PY3:
+ if path == CSTRING:
name = PyBytes_FromString(word)
- else:
- if self.c_encoding == NULL or self.c_encoding == b'utf-8':
- name = PyUnicode_FromString(word)
- else:
- name = PyUnicode_Decode(word, strlen(word),
- self.c_encoding, errors)
+ elif path == UTF8:
+ name = PyUnicode_FromString(word)
+ elif path == ENCODED:
+ name = PyUnicode_Decode(word, strlen(word),
+ self.c_encoding, errors)
if name == '':
if self.has_mi_columns:
@@ -1076,17 +1081,12 @@ cdef class TextReader:
col_dtype = self.dtype[i]
else:
if self.dtype.names:
- col_dtype = self.dtype.descr[i][1]
+ # structured array
+ col_dtype = np.dtype(self.dtype.descr[i][1])
else:
col_dtype = self.dtype
if col_dtype is not None:
- if not isinstance(col_dtype, basestring):
- if isinstance(col_dtype, np.dtype):
- col_dtype = col_dtype.str
- else:
- col_dtype = np.dtype(col_dtype).str
-
col_res, na_count = self._convert_with_dtype(col_dtype, i, start, end,
na_filter, 1, na_hashset, na_flist)
@@ -1104,7 +1104,7 @@ cdef class TextReader:
dt, i, start, end, na_filter, 0, na_hashset, na_flist)
except OverflowError:
col_res, na_count = self._convert_with_dtype(
- '|O8', i, start, end, na_filter, 0, na_hashset, na_flist)
+ np.dtype('object'), i, start, end, na_filter, 0, na_hashset, na_flist)
if col_res is not None:
break
@@ -1136,90 +1136,88 @@ cdef class TextReader:
bint user_dtype,
kh_str_t *na_hashset,
object na_flist):
- if dtype[1] == 'i' or dtype[1] == 'u':
- result, na_count = _try_int64(self.parser, i, start, end,
- na_filter, na_hashset)
+ if is_integer_dtype(dtype):
+ result, na_count = _try_int64(self.parser, i, start, end, na_filter,
+ na_hashset)
if user_dtype and na_count is not None:
if na_count > 0:
raise ValueError("Integer column has NA values in "
- "column {column}".format(column=i))
+ "column {column}".format(column=i))
- if result is not None and dtype[1:] != 'i8':
+ if result is not None and dtype != 'int64':
result = result.astype(dtype)
return result, na_count
- elif dtype[1] == 'f':
+ elif is_float_dtype(dtype):
result, na_count = _try_double(self.parser, i, start, end,
na_filter, na_hashset, na_flist)
- if result is not None and dtype[1:] != 'f8':
+ if result is not None and dtype != 'float64':
result = result.astype(dtype)
return result, na_count
- elif dtype[1] == 'b':
+ elif is_bool_dtype(dtype):
result, na_count = _try_bool_flex(self.parser, i, start, end,
na_filter, na_hashset,
self.true_set, self.false_set)
return result, na_count
- elif dtype[1] == 'c':
- raise NotImplementedError("the dtype %s is not supported for parsing" % dtype)
-
- elif dtype[1] == 'S':
+ elif dtype.kind == 'S':
# TODO: na handling
- width = int(dtype[2:])
+ width = dtype.itemsize
if width > 0:
result = _to_fw_string(self.parser, i, start, end, width)
return result, 0
# treat as a regular string parsing
return self._string_convert(i, start, end, na_filter,
- na_hashset)
- elif dtype[1] == 'U':
- width = int(dtype[2:])
+ na_hashset)
+ elif dtype.kind == 'U':
+ width = dtype.itemsize
if width > 0:
- raise NotImplementedError("the dtype %s is not supported for parsing" % dtype)
+ raise TypeError("the dtype %s is not supported for parsing" % dtype)
# unicode variable width
return self._string_convert(i, start, end, na_filter,
na_hashset)
-
-
- elif dtype[1] == 'O':
+ elif is_categorical_dtype(dtype):
+ codes, cats, na_count = _categorical_convert(self.parser, i, start,
+ end, na_filter, na_hashset,
+ self.c_encoding)
+ # sort categories and recode if necessary
+ cats = Index(cats)
+ if not cats.is_monotonic_increasing:
+ unsorted = cats.copy()
+ cats = cats.sort_values()
+ indexer = cats.get_indexer(unsorted)
+ codes = take_1d(indexer, codes, fill_value=-1)
+
+ return Categorical(codes, categories=cats, ordered=False,
+ fastpath=True), na_count
+ elif is_object_dtype(dtype):
return self._string_convert(i, start, end, na_filter,
na_hashset)
+ elif is_datetime64_dtype(dtype):
+ raise TypeError("the dtype %s is not supported for parsing, "
+ "pass this column using parse_dates instead" % dtype)
else:
- if dtype[1] == 'M':
- raise TypeError("the dtype %s is not supported for parsing, "
- "pass this column using parse_dates instead" % dtype)
raise TypeError("the dtype %s is not supported for parsing" % dtype)
cdef _string_convert(self, Py_ssize_t i, int start, int end,
bint na_filter, kh_str_t *na_hashset):
- if PY3:
- if self.c_encoding != NULL:
- if self.c_encoding == b"utf-8":
- return _string_box_utf8(self.parser, i, start, end,
- na_filter, na_hashset)
- else:
- return _string_box_decode(self.parser, i, start, end,
- na_filter, na_hashset,
- self.c_encoding)
- else:
- return _string_box_utf8(self.parser, i, start, end,
- na_filter, na_hashset)
- else:
- if self.c_encoding != NULL:
- if self.c_encoding == b"utf-8":
- return _string_box_utf8(self.parser, i, start, end,
- na_filter, na_hashset)
- else:
- return _string_box_decode(self.parser, i, start, end,
- na_filter, na_hashset,
- self.c_encoding)
- else:
- return _string_box_factorize(self.parser, i, start, end,
- na_filter, na_hashset)
+
+ cdef StringPath path = _string_path(self.c_encoding)
+
+ if path == UTF8:
+ return _string_box_utf8(self.parser, i, start, end, na_filter,
+ na_hashset)
+ elif path == ENCODED:
+ return _string_box_decode(self.parser, i, start, end,
+ na_filter, na_hashset, self.c_encoding)
+ elif path == CSTRING:
+ return _string_box_factorize(self.parser, i, start, end,
+ na_filter, na_hashset)
+
def _get_converter(self, i, name):
if self.converters is None:
@@ -1331,6 +1329,19 @@ def _maybe_upcast(arr):
return arr
+cdef enum StringPath:
+ CSTRING
+ UTF8
+ ENCODED
+
+# factored out logic to pick string converter
+cdef inline StringPath _string_path(char *encoding):
+ if encoding != NULL and encoding != b"utf-8":
+ return ENCODED
+ elif PY3 or encoding != NULL:
+ return UTF8
+ else:
+ return CSTRING
# ----------------------------------------------------------------------
# Type conversions / inference support code
@@ -1500,6 +1511,77 @@ cdef _string_box_decode(parser_t *parser, int col,
return result, na_count
[email protected](False)
+cdef _categorical_convert(parser_t *parser, int col,
+ int line_start, int line_end,
+ bint na_filter, kh_str_t *na_hashset,
+ char *encoding):
+ "Convert column data into codes, categories"
+ cdef:
+ int error, na_count = 0
+ Py_ssize_t i, size
+ size_t lines
+ coliter_t it
+ const char *word = NULL
+
+ int64_t NA = -1
+ int64_t[:] codes
+ int64_t current_category = 0
+
+ char *errors = "strict"
+ cdef StringPath path = _string_path(encoding)
+
+ int ret = 0
+ kh_str_t *table
+ khiter_t k
+
+ lines = line_end - line_start
+ codes = np.empty(lines, dtype=np.int64)
+
+ # factorize parsed values, creating a hash table
+ # bytes -> category code
+ with nogil:
+ table = kh_init_str()
+ coliter_setup(&it, parser, col, line_start)
+
+ for i in range(lines):
+ COLITER_NEXT(it, word)
+
+ if na_filter:
+ k = kh_get_str(na_hashset, word)
+ # is in NA values
+ if k != na_hashset.n_buckets:
+ na_count += 1
+ codes[i] = NA
+ continue
+
+ k = kh_get_str(table, word)
+ # not in the hash table
+ if k == table.n_buckets:
+ k = kh_put_str(table, word, &ret)
+ table.vals[k] = current_category
+ current_category += 1
+
+ codes[i] = table.vals[k]
+
+ # parse and box categories to python strings
+ result = np.empty(table.n_occupied, dtype=np.object_)
+ if path == ENCODED:
+ for k in range(table.n_buckets):
+ if kh_exist_str(table, k):
+ size = strlen(table.keys[k])
+ result[table.vals[k]] = PyUnicode_Decode(table.keys[k], size, encoding, errors)
+ elif path == UTF8:
+ for k in range(table.n_buckets):
+ if kh_exist_str(table, k):
+ result[table.vals[k]] = PyUnicode_FromString(table.keys[k])
+ elif path == CSTRING:
+ for k in range(table.n_buckets):
+ if kh_exist_str(table, k):
+ result[table.vals[k]] = PyBytes_FromString(table.keys[k])
+
+ kh_destroy_str(table)
+ return np.asarray(codes), result, na_count
cdef _to_fw_string(parser_t *parser, int col, int line_start,
int line_end, size_t width):
@@ -1719,6 +1801,7 @@ cdef inline int _try_bool_nogil(parser_t *parser, int col, int line_start, int l
const char *word = NULL
khiter_t k
na_count[0] = 0
+
coliter_setup(&it, parser, col, line_start)
if na_filter:
@@ -1836,6 +1919,7 @@ cdef inline int _try_bool_flex_nogil(parser_t *parser, int col, int line_start,
return 0
+
cdef kh_str_t* kset_from_list(list values) except NULL:
# caller takes responsibility for freeing the hash table
cdef:
@@ -1924,7 +2008,11 @@ def _concatenate_chunks(list chunks):
common_type = np.find_common_type(dtypes, [])
if common_type == np.object:
warning_columns.append(str(name))
- result[name] = np.concatenate(arrs)
+
+ if is_categorical_dtype(dtypes.pop()):
+ result[name] = union_categoricals(arrs, sort_categories=True)
+ else:
+ result[name] = np.concatenate(arrs)
if warning_columns:
warning_names = ','.join(warning_columns)
diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py
index 225ba533161b3..e3cc60e2856c2 100644
--- a/pandas/tools/tests/test_concat.py
+++ b/pandas/tools/tests/test_concat.py
@@ -850,6 +850,9 @@ def test_union_categorical(self):
([0, 1, 2], [2, 3, 4], [0, 1, 2, 2, 3, 4]),
([0, 1.2, 2], [2, 3.4, 4], [0, 1.2, 2, 2, 3.4, 4]),
+ (['b', 'b', np.nan, 'a'], ['a', np.nan, 'c'],
+ ['b', 'b', np.nan, 'a', 'a', np.nan, 'c']),
+
(pd.date_range('2014-01-01', '2014-01-05'),
pd.date_range('2014-01-06', '2014-01-07'),
pd.date_range('2014-01-01', '2014-01-07')),
| Closes #10153 (at least partly)
Adds the ability to directly parse a `Categorical` through the `dtype` parameter to `read_csv`. Currently it just uses whatever is there as the categories; a possible enhancement would be to allow and enforce user-specified categories, though I'm not quite sure what the api would be.
This only parses string categories - originally I had an impl that did type inference on the categories, but it added a lot of complication without much benefit; the recommendation in the docs is now to convert after parsing.
Here's an example timing. For reasonably sparse data I'm typically seeing slightly under a 2x speedup, along with much better memory usage.
``` python
group1 = ['aaaaa', 'bbbbb', 'cccccc', 'ddddddd', 'eeeeeeee']
df = pd.DataFrame({'a': np.random.choice(group1, 10000000).astype('object'),
'b': np.random.choice(group1, 10000000).astype('object'),
'c': np.random.choice(group1, 10000000).astype('object')})
df.to_csv('strings.csv', index=False)
In [14]: %timeit pd.read_csv('strings.csv').apply(pd.Categorical)
1 loops, best of 3: 6.66 s per loop
In [13]: %timeit pd.read_csv('strings.csv', dtype='category')
1 loop, best of 3: 3.68 s per loop
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/13406 | 2016-06-08T23:31:14Z | 2016-08-06T22:53:22Z | null | 2016-08-06T22:54:20Z |
ENH: implement 'reverse' for grouping | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0852c5a293f4e..d9edc9186ccc5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3765,7 +3765,7 @@ def clip_lower(self, threshold, axis=None):
return self.where(subset, threshold, axis=axis)
def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
- group_keys=True, squeeze=False, **kwargs):
+ group_keys=True, squeeze=False, reverse=False, **kwargs):
"""
Group series using mapper (dict or key function, apply given function
to group, return result as series) or by a series of columns.
@@ -3794,6 +3794,8 @@ def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
squeeze : boolean, default False
reduce the dimensionality of the return type if possible,
otherwise return a consistent type
+ reverse : boolean, default False
+ invert the selection criteria
Examples
--------
@@ -3818,6 +3820,7 @@ def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
axis = self._get_axis_number(axis)
return groupby(self, by=by, axis=axis, level=level, as_index=as_index,
sort=sort, group_keys=group_keys, squeeze=squeeze,
+ reverse=reverse,
**kwargs)
def asfreq(self, freq, method=None, how=None, normalize=False):
@@ -5058,8 +5061,8 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
np.putmask(rs.values, mask, np.nan)
return rs
- def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwargs):
- grouped = self.groupby(level=level, axis=axis)
+ def _agg_by_level(self, name, axis=0, level=0, skipna=True, reverse=False, **kwargs):
+ grouped = self.groupby(level=level, axis=axis, reverse=reverse)
if hasattr(grouped, name) and skipna:
return getattr(grouped, name)(**kwargs)
axis = self._get_axis_number(axis)
@@ -5341,6 +5344,7 @@ def _make_stat_function(cls, name, name1, name2, axis_descr, desc, f):
axis_descr=axis_descr)
@Appender(_num_doc)
def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
+ reverse_level=False,
**kwargs):
nv.validate_stat_func(tuple(), kwargs, fname=name)
if skipna is None:
@@ -5349,6 +5353,7 @@ def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
axis = self._stat_axis_number
if level is not None:
return self._agg_by_level(name, axis=axis, level=level,
+ reverse=reverse_level,
skipna=skipna)
return self._reduce(f, name, axis=axis, skipna=skipna,
numeric_only=numeric_only)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index bea62e98e4a2a..419def6937c0b 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -325,7 +325,21 @@ class _GroupBy(PandasObject, SelectionMixin):
def __init__(self, obj, keys=None, axis=0, level=None,
grouper=None, exclusions=None, selection=None, as_index=True,
- sort=True, group_keys=True, squeeze=False, **kwargs):
+ sort=True, group_keys=True, squeeze=False, reverse=False, **kwargs):
+
+ if reverse:
+ if not (isinstance(obj, DataFrame) or isinstance(obj, Series)):
+ raise NotImplementedError(
+ "reverse not implemented for type: {}".format(type(obj)))
+ if axis != 0 or as_index == False or any(v is not None for v in
+ (grouper, exclusions, selection)):
+ raise NotImplementedError("reverse not implemented for provided args")
+ if level is not None and keys is None:
+ level = [l for l in obj.index.names if l not in level]
+ elif keys is not None and level is None:
+ keys = [k for k in obj.columns if k not in keys]
+ else:
+ raise NotImplementedError("Unknown behavior when keys and level are provided with 'reverse' set")
self._selection = selection
| When there are lots of columns or index levels, it is more convenient to specify the column or index level that should be aggregated over rather than listing all of the other columns or index levels.
For example, on a MultiIndex DataFrame,
```
df.mean(level=['year', 'month', 'day', 'hour', 'min', 'trial'])
```
can be replaced with
```
df.mean(level='rep', reverse_level=True)
```
or, for a flat-index DataFrame (where the data is in a column named 'result'),
```
df.groupby(['year', 'month', 'day', 'hour', 'min', 'trial']).mean()
```
can be replaced with
```
df.groupby(['rep', 'result'], reverse=True)
```
http://stackoverflow.com/questions/16808682/is-there-in-pandas-operation-complementary-opposite-to-groupby
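Since the patch only computes the complement of the supplied keys or levels before delegating to the normal groupby path, the same result is available today by spelling out that complement; a sketch with made-up data:

```python
import numpy as np
import pandas as pd

# Hypothetical data: several grouping levels plus a 'rep' level
idx = pd.MultiIndex.from_product([[2015, 2016], [1, 2], [0, 1, 2]],
                                 names=['year', 'month', 'rep'])
df = pd.DataFrame({'result': np.arange(12.0)}, index=idx)

# The proposed reverse behavior is equivalent to grouping by the
# complement of the given levels, which can be written explicitly:
keep = [name for name in df.index.names if name != 'rep']
out = df.groupby(level=keep).mean()
print(out.index.names)  # ['year', 'month']
```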
| https://api.github.com/repos/pandas-dev/pandas/pulls/13405 | 2016-06-08T22:39:31Z | 2016-06-08T23:54:15Z | null | 2023-05-11T01:13:40Z |
BUG: don't raise on empty usecols | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 8b80901774828..105194e504f45 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -358,6 +358,7 @@ Bug Fixes
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`)
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`)
- Bug in ``pd.read_csv()`` that prevents ``usecols`` kwarg from accepting single-byte unicode strings (:issue:`13219`)
+- Bug in ``pd.read_csv()`` that prevents ``usecols`` from being an empty set (:issue:`13402`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 4e954979f7d08..475eb73812666 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -944,7 +944,8 @@ def _validate_usecols_arg(usecols):
if usecols is not None:
usecols_dtype = lib.infer_dtype(usecols)
- if usecols_dtype not in ('integer', 'string', 'unicode'):
+ if usecols_dtype not in ('empty', 'integer',
+ 'string', 'unicode'):
raise ValueError(msg)
return usecols
diff --git a/pandas/io/tests/parser/usecols.py b/pandas/io/tests/parser/usecols.py
index 0d3ae95f0d1d4..8e34018df279b 100644
--- a/pandas/io/tests/parser/usecols.py
+++ b/pandas/io/tests/parser/usecols.py
@@ -354,3 +354,10 @@ def test_usecols_with_multibyte_unicode_characters(self):
df = self.read_csv(StringIO(s), usecols=[u'あああ', u'いい'])
tm.assert_frame_equal(df, expected)
+
+ def test_empty_usecols(self):
+ # should not raise
+ data = 'a,b,c\n1,2,3\n4,5,6'
+ expected = DataFrame()
+ result = self.read_csv(StringIO(data), usecols=set([]))
+ tm.assert_frame_equal(result, expected)
| Title is self-explanatory.
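A quick illustration of the fixed behavior (assumes a pandas build including this fix): an empty `usecols` now selects no columns instead of raising.

```python
from io import StringIO

import pandas as pd

data = 'a,b,c\n1,2,3\n4,5,6'
# Previously this raised ValueError; now it simply selects zero columns
df = pd.read_csv(StringIO(data), usecols=set())
print(df.shape[1])  # 0
```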
| https://api.github.com/repos/pandas-dev/pandas/pulls/13402 | 2016-06-08T16:07:13Z | 2016-06-09T11:38:49Z | null | 2016-06-09T12:10:21Z |
BUG: Fix for .extractall (single group with quantifier) #13382 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 1e95af2df247b..d5998aba6001b 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -335,6 +335,7 @@ Bug Fixes
- Bug in ``SeriesGroupBy.transform`` with datetime values and missing groups (:issue:`13191`)
- Bug in ``Series.str.extractall()`` with ``str`` index raises ``ValueError`` (:issue:`13156`)
+- Bug in ``Series.str.extractall()`` with single group and quantifier (:issue:`13382`)
- Bug in ``PeriodIndex`` and ``Period`` subtraction raises ``AttributeError`` (:issue:`13071`)
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 5b1b8bd05af42..2f9f8ec936e78 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -708,6 +708,8 @@ def str_extractall(arr, pat, flags=0):
subject_key = (subject_key, )
for match_i, match_tuple in enumerate(regex.findall(subject)):
+ if isinstance(match_tuple, compat.string_types):
+ match_tuple = (match_tuple,)
na_tuple = [np.NaN if group == "" else group
for group in match_tuple]
match_list.append(na_tuple)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 3d1851966afd0..73f9809a7f042 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -977,6 +977,20 @@ def test_extractall_single_group(self):
e = DataFrame(['a', 'b', 'd', 'c'], i)
tm.assert_frame_equal(r, e)
+ def test_extractall_single_group_with_quantifier(self):
+ # extractall(one un-named group with quantifier) returns
+ # DataFrame with one un-named column (GH13382).
+ s = Series(['ab3', 'abc3', 'd4cd2'], name='series_name')
+ r = s.str.extractall(r'([a-z]+)')
+ i = MultiIndex.from_tuples([
+ (0, 0),
+ (1, 0),
+ (2, 0),
+ (2, 1),
+ ], names=(None, "match"))
+ e = DataFrame(['ab', 'abc', 'd', 'cd'], i)
+ tm.assert_frame_equal(r, e)
+
def test_extractall_no_matches(self):
s = Series(['a3', 'b3', 'd4c2'], name='series_name')
# one un-named group.
| - [X] closes #13382
- [X] tests added / passed
- [X] passes `git diff upstream/master | flake8 --diff`
- [X] whatsnew entry
Note that I had to fix it based on this [thread](http://stackoverflow.com/questions/2111759/whats-the-best-practice-for-handling-single-value-tuples-in-python), rather than directly with `[x for x in ['ab']]` as this broke previous tests.
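The root cause is a quirk of `re.findall`: with a single group it yields plain strings, while multiple groups yield tuples, so code that iterates over each "match tuple" ends up iterating over a string character by character. A standalone illustration of why the fix wraps single-group matches in a 1-tuple:

```python
import re

# One group: findall returns a list of strings...
print(re.findall(r'([a-z]+)', 'ab3 cd4'))      # ['ab', 'cd']

# ...but two groups return a list of tuples
print(re.findall(r'([a-z]+)(\d)', 'ab3 cd4'))  # [('ab', '3'), ('cd', '4')]

# Iterating over a bare string splits it into characters, so the fix
# normalizes single-group matches into 1-tuples before iterating:
match = 'ab'
if isinstance(match, str):
    match = (match,)
print([g for g in match])                      # ['ab']
```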
| https://api.github.com/repos/pandas-dev/pandas/pulls/13397 | 2016-06-08T12:35:01Z | 2016-06-08T15:27:07Z | 2016-06-08T15:27:07Z | 2016-06-08T15:48:40Z |
BUG: Fix groupby with "as_index" for categorical multi #13204 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index eae03b2a86661..be1f745537d05 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -527,3 +527,4 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``groupby`` with ``as_index=False`` returns all NaN's when grouping on multiple columns including a categorical one (:issue:`13204`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index f6915e962c049..04e4db9d1fdc6 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2250,7 +2250,7 @@ def __init__(self, index, grouper=None, obj=None, name=None, level=None,
self.grouper = to_timedelta(self.grouper)
def __repr__(self):
- return 'Grouping(%s)' % self.name
+ return 'Grouping({0})'.format(self.name)
def __iter__(self):
return iter(self.indices)
@@ -3741,9 +3741,39 @@ def _reindex_output(self, result):
return result
levels_list = [ping.group_index for ping in groupings]
- index = MultiIndex.from_product(levels_list, names=self.grouper.names)
- d = {self.obj._get_axis_name(self.axis): index, 'copy': False}
- return result.reindex(**d).sortlevel(axis=self.axis)
+ index, _ = MultiIndex.from_product(
+ levels_list, names=self.grouper.names).sortlevel()
+
+ if self.as_index:
+ d = {self.obj._get_axis_name(self.axis): index, 'copy': False}
+ return result.reindex(**d)
+
+ # GH 13204
+ # Here, the categorical in-axis groupers, which need to be fully
+ # expanded, are columns in `result`. An idea is to do:
+ # result = result.set_index(self.grouper.names)
+ # .reindex(index).reset_index()
+ # but special care has to be taken because of possible not-in-axis
+ # groupers.
+ # So, we manually select and drop the in-axis grouper columns,
+ # reindex `result`, and then reset the in-axis grouper columns.
+
+ # Select in-axis groupers
+ in_axis_grps = [(i, ping.name) for (i, ping)
+ in enumerate(groupings) if ping.in_axis]
+ g_nums, g_names = zip(*in_axis_grps)
+
+ result = result.drop(labels=list(g_names), axis=1)
+
+ # Set a temp index and reindex (possibly expanding)
+ result = result.set_index(self.grouper.result_index
+ ).reindex(index, copy=False)
+
+ # Reset in-axis grouper columns
+ # (using level numbers `g_nums` because level names may not be unique)
+ result = result.reset_index(level=g_nums)
+
+ return result.reset_index(drop=True)
def _iterate_column_groupbys(self):
for i, colname in enumerate(self._selected_obj.columns):
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 6659e6b106a67..bc25525f936ac 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -6304,6 +6304,47 @@ def test_groupby_categorical_two_columns(self):
nan, nan, nan, nan, 200, 34]}, index=idx)
tm.assert_frame_equal(res, exp)
+ def test_groupby_multi_categorical_as_index(self):
+ # GH13204
+ df = DataFrame({'cat': Categorical([1, 2, 2], [1, 2, 3]),
+ 'A': [10, 11, 11],
+ 'B': [101, 102, 103]})
+ result = df.groupby(['cat', 'A'], as_index=False).sum()
+ expected = DataFrame({'cat': [1, 1, 2, 2, 3, 3],
+ 'A': [10, 11, 10, 11, 10, 11],
+ 'B': [101.0, nan, nan, 205.0, nan, nan]},
+ columns=['cat', 'A', 'B'])
+ tm.assert_frame_equal(result, expected)
+
+ # function grouper
+ f = lambda r: df.loc[r, 'A']
+ result = df.groupby(['cat', f], as_index=False).sum()
+ expected = DataFrame({'cat': [1, 1, 2, 2, 3, 3],
+ 'A': [10.0, nan, nan, 22.0, nan, nan],
+ 'B': [101.0, nan, nan, 205.0, nan, nan]},
+ columns=['cat', 'A', 'B'])
+ tm.assert_frame_equal(result, expected)
+
+ # another not in-axis grouper (conflicting names in index)
+ s = Series(['a', 'b', 'b'], name='cat')
+ result = df.groupby(['cat', s], as_index=False).sum()
+ expected = DataFrame({'cat': [1, 1, 2, 2, 3, 3],
+ 'A': [10.0, nan, nan, 22.0, nan, nan],
+ 'B': [101.0, nan, nan, 205.0, nan, nan]},
+ columns=['cat', 'A', 'B'])
+ tm.assert_frame_equal(result, expected)
+
+ # is original index dropped?
+ expected = DataFrame({'cat': [1, 1, 2, 2, 3, 3],
+ 'A': [10, 11, 10, 11, 10, 11],
+ 'B': [101.0, nan, nan, 205.0, nan, nan]},
+ columns=['cat', 'A', 'B'])
+
+ for name in [None, 'X', 'B', 'cat']:
+ df.index = Index(list("abc"), name=name)
+ result = df.groupby(['cat', 'A'], as_index=False).sum()
+ tm.assert_frame_equal(result, expected, check_index_type=True)
+
def test_groupby_apply_all_none(self):
# Tests to make sure no errors if apply function returns all None
# values. Issue 9684.
@@ -6431,6 +6472,16 @@ def test_numpy_compat(self):
tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
getattr(g, func), foo=1)
+ def test_grouping_string_repr(self):
+ # GH 13394
+ mi = MultiIndex.from_arrays([list("AAB"), list("aba")])
+ df = DataFrame([[1, 2, 3]], columns=mi)
+ gr = df.groupby(df[('A', 'a')])
+
+ result = gr.grouper.groupings[0].__repr__()
+ expected = "Grouping(('A', 'a'))"
+ tm.assert_equal(result, expected)
+
def assert_fp_equal(a, b):
assert (np.abs(a - b) < 1e-12).all()
| - [x] closes #13204
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Fixes a bug in which `groupby` with `as_index=False` returned all NaNs when grouping
on multiple columns that include a categorical one (#13204).
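Conceptually, the fix reindexes the result onto the full Cartesian product of the group levels (including unused categories), filling in missing combinations with NaN. A pure-Python sketch of that expansion, using the same values as the test in the diff (the dict-based representation here is purely illustrative, not how pandas stores the result):

```python
from itertools import product

# Levels of the categorical grouper (3 is an unused category) and of 'A'
cat_levels = [1, 2, 3]
a_levels = [10, 11]

# 'B' sums actually observed in the grouped data
observed = {(1, 10): 101, (2, 11): 205}

# Expand to the full product; absent combinations become None (NaN in pandas)
expanded = {key: observed.get(key) for key in product(cat_levels, a_levels)}

assert len(expanded) == 6
assert expanded[(1, 10)] == 101
assert expanded[(2, 11)] == 205
assert expanded[(3, 10)] is None
```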
---
Also fixes an internal bug in the string representation of `Grouping`:
``` python
mi = pd.MultiIndex.from_arrays([list("AAB"), list("aba")])
df = pd.DataFrame([[1,2,3]], columns=mi)
gr = df.groupby(df[('A', 'a')])
gr.grouper.groupings
...
TypeError: not all arguments converted during string formatting
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/13394 | 2016-06-07T23:50:50Z | 2016-07-03T23:28:31Z | null | 2016-07-11T17:41:32Z |
COMPAT, TST: allow numpy array comparisons with complex dtypes | diff --git a/pandas/core/common.py b/pandas/core/common.py
index d26c59e62de30..28bae362a3411 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -349,7 +349,24 @@ def array_equivalent(left, right, strict_nan=False):
right = right.view('i8')
# NaNs cannot occur otherwise.
- return np.array_equal(left, right)
+ try:
+ return np.array_equal(left, right)
+ except AttributeError:
+ # see gh-13388
+ #
+ # NumPy v1.7.1 has a bug in its array_equal
+ # function that prevents it from correctly
+ # comparing two arrays with complex dtypes.
+ # This bug is corrected in v1.8.0, so remove
+ # this try-except block as soon as we stop
+ # supporting NumPy versions < 1.8.0
+ if not is_dtype_equal(left.dtype, right.dtype):
+ return False
+
+ left = left.tolist()
+ right = right.tolist()
+
+ return left == right
def _iterable_not_string(x):
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index ad43dc1c09ef1..56b1b542d547e 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -832,6 +832,24 @@ def test_is_timedelta():
assert (not com.is_timedelta64_ns_dtype(tdi.astype('timedelta64[h]')))
+def test_array_equivalent_compat():
+ # see gh-13388
+ m = np.array([(1, 2), (3, 4)], dtype=[('a', int), ('b', float)])
+ n = np.array([(1, 2), (3, 4)], dtype=[('a', int), ('b', float)])
+ assert (com.array_equivalent(m, n, strict_nan=True))
+ assert (com.array_equivalent(m, n, strict_nan=False))
+
+ m = np.array([(1, 2), (3, 4)], dtype=[('a', int), ('b', float)])
+ n = np.array([(1, 2), (4, 3)], dtype=[('a', int), ('b', float)])
+ assert (not com.array_equivalent(m, n, strict_nan=True))
+ assert (not com.array_equivalent(m, n, strict_nan=False))
+
+ m = np.array([(1, 2), (3, 4)], dtype=[('a', int), ('b', float)])
+ n = np.array([(1, 2), (3, 4)], dtype=[('b', int), ('a', float)])
+ assert (not com.array_equivalent(m, n, strict_nan=True))
+ assert (not com.array_equivalent(m, n, strict_nan=False))
+
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| Traces back to a bug in NumPy `v1.7.1` in which `np.array_equal` could not compare NumPy arrays with structured dtypes. As `pandas` relies on this function in `array_equivalent` to check NumPy array equality during testing, this commit adds a fallback method for doing so.
Closes #13388.
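The shape of the workaround is a plain try/except fallback: attempt the fast NumPy comparison and, if it raises the `AttributeError` seen on 1.7.1, drop down to an elementwise comparison via Python lists. A minimal numpy-free sketch of that control flow (the helper names are hypothetical stand-ins, not the actual pandas functions):

```python
def equal_with_fallback(left, right, fast_equal):
    """Try a fast array comparison; fall back to list comparison on failure."""
    try:
        return fast_equal(left, right)
    except AttributeError:
        # mirror of the PR's fallback: compare via plain Python lists
        return list(left) == list(right)

def broken_fast_equal(a, b):
    # stand-in for np.array_equal on structured dtypes under NumPy 1.7.1
    raise AttributeError("cannot compare structured dtypes")

assert equal_with_fallback([(1, 2.0), (3, 4.0)], [(1, 2.0), (3, 4.0)], broken_fast_equal)
assert not equal_with_fallback([(1, 2.0)], [(1, 3.0)], broken_fast_equal)
```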
| https://api.github.com/repos/pandas-dev/pandas/pulls/13392 | 2016-06-07T19:41:20Z | 2016-06-07T22:18:39Z | 2016-06-07T22:18:39Z | 2016-06-07T22:19:11Z |
API: Deprecate skip_footer in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index e3b03b5a39b37..ee5734aaf9494 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -175,6 +175,8 @@ skiprows : list-like or integer, default ``None``
of the file.
skipfooter : int, default ``0``
Number of lines at bottom of file to skip (unsupported with engine='c').
+skip_footer : int, default ``0``
+ DEPRECATED: use the ``skipfooter`` parameter instead, as they are identical
nrows : int, default ``None``
Number of rows of file to read. Useful for reading pieces of large files.
low_memory : boolean, default ``True``
@@ -1411,7 +1413,7 @@ back to python if C-unsupported options are specified. Currently, C-unsupported
options include:
- ``sep`` other than a single character (e.g. regex separators)
-- ``skip_footer``
+- ``skipfooter``
- ``sep=None`` with ``delim_whitespace=False``
Specifying any of the above options will produce a ``ParserWarning`` unless the
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 11d2fab464d1f..ed6f9b7303cd3 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -612,6 +612,7 @@ Deprecations
- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13320`)
- ``buffer_lines`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13360`)
- ``as_recarray`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13373`)
+- ``skip_footer`` has been deprecated in ``pd.read_csv()`` in favor of ``skipfooter`` and will be removed in a future version (:issue:`13349`)
- top-level ``pd.ordered_merge()`` has been renamed to ``pd.merge_ordered()`` and the original name will be removed in a future version (:issue:`13358`)
- ``Timestamp.offset`` property (and named arg in the constructor), has been deprecated in favor of ``freq`` (:issue:`12160`)
- ``pd.tseries.util.pivot_annual`` is deprecated. Use ``pivot_table`` as alternative, an example is :ref:`here <cookbook.pivot>` (:issue:`736`)
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 703cdbeaa7a8f..b415661c99438 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -473,7 +473,7 @@ def _parse_cell(cell_contents, cell_typ):
parse_dates=parse_dates,
date_parser=date_parser,
skiprows=skiprows,
- skip_footer=skip_footer,
+ skipfooter=skip_footer,
squeeze=squeeze,
**kwds)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index bedf21318aa83..8002cbb9f5f02 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -125,6 +125,8 @@
at the start of the file
skipfooter : int, default 0
Number of lines at bottom of file to skip (Unsupported with engine='c')
+skip_footer : int, default 0
+ DEPRECATED: use the `skipfooter` parameter instead, as they are identical
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files
na_values : str or list-like or dict, default None
@@ -341,9 +343,6 @@ def _validate_nrows(nrows):
def _read(filepath_or_buffer, kwds):
"Generic reader of line files."
encoding = kwds.get('encoding', None)
- skipfooter = kwds.pop('skipfooter', None)
- if skipfooter is not None:
- kwds['skip_footer'] = skipfooter
# If the input could be a filename, check for a recognizable compression
# extension. If we're reading from a URL, the `get_filepath_or_buffer`
@@ -411,8 +410,8 @@ def _read(filepath_or_buffer, kwds):
'na_values': None,
'true_values': None,
'false_values': None,
- 'skip_footer': 0,
'converters': None,
+ 'skipfooter': 0,
'keep_default_na': True,
'thousands': None,
@@ -461,7 +460,7 @@ def _read(filepath_or_buffer, kwds):
'widths': None,
}
-_c_unsupported = set(['skip_footer'])
+_c_unsupported = set(['skipfooter'])
_python_unsupported = set([
'low_memory',
'buffer_lines',
@@ -503,7 +502,6 @@ def parser_f(filepath_or_buffer,
false_values=None,
skipinitialspace=False,
skiprows=None,
- skipfooter=None,
nrows=None,
# NA and Missing Data Handling
@@ -541,8 +539,8 @@ def parser_f(filepath_or_buffer,
error_bad_lines=True,
warn_bad_lines=True,
- # Deprecated
- skip_footer=0,
+ skipfooter=0,
+ skip_footer=0, # deprecated
# Internal
doublequote=True,
@@ -570,6 +568,13 @@ def parser_f(filepath_or_buffer,
engine = 'c'
engine_specified = False
+ if skip_footer != 0:
+ warnings.warn("The 'skip_footer' argument has "
+ "been deprecated and will be removed "
+ "in a future version. Please use the "
+ "'skipfooter' argument instead.",
+ FutureWarning, stacklevel=2)
+
kwds = dict(delimiter=delimiter,
engine=engine,
dialect=dialect,
@@ -767,9 +772,9 @@ def _clean_options(self, options, engine):
# C engine not supported yet
if engine == 'c':
- if options['skip_footer'] > 0:
+ if options['skipfooter'] > 0:
fallback_reason = "the 'c' engine does not support"\
- " skip_footer"
+ " skipfooter"
engine = 'python'
if sep is None and not delim_whitespace:
@@ -902,8 +907,8 @@ def _failover_to_python(self):
def read(self, nrows=None):
if nrows is not None:
- if self.options.get('skip_footer'):
- raise ValueError('skip_footer not supported for iteration')
+ if self.options.get('skipfooter'):
+ raise ValueError('skipfooter not supported for iteration')
ret = self._engine.read(nrows)
@@ -1578,7 +1583,7 @@ def TextParser(*args, **kwds):
date_parser : function, default None
skiprows : list of integers
Row numbers to skip
- skip_footer : int
+ skipfooter : int
Number of line at bottom of file to skip
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can
@@ -1691,7 +1696,7 @@ def __init__(self, f, **kwds):
self.memory_map = kwds['memory_map']
self.skiprows = kwds['skiprows']
- self.skip_footer = kwds['skip_footer']
+ self.skipfooter = kwds['skipfooter']
self.delimiter = kwds['delimiter']
self.quotechar = kwds['quotechar']
@@ -2323,7 +2328,7 @@ def _rows_to_cols(self, content):
content, min_width=col_len).T)
zip_len = len(zipped_content)
- if self.skip_footer < 0:
+ if self.skipfooter < 0:
raise ValueError('skip footer cannot be negative')
# Loop through rows to verify lengths are correct.
@@ -2336,8 +2341,8 @@ def _rows_to_cols(self, content):
break
footers = 0
- if self.skip_footer:
- footers = self.skip_footer
+ if self.skipfooter:
+ footers = self.skipfooter
row_num = self.pos - (len(content) - i + footers)
@@ -2423,8 +2428,8 @@ def _get_lines(self, rows=None):
else:
lines = new_rows
- if self.skip_footer:
- lines = lines[:-self.skip_footer]
+ if self.skipfooter:
+ lines = lines[:-self.skipfooter]
lines = self._check_comments(lines)
if self.skip_blank_lines:
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 11eed79e03267..87f69020fa685 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -218,9 +218,9 @@ def test_malformed(self):
skiprows=[2])
it.read()
- # skip_footer is not supported with the C parser yet
+ # skipfooter is not supported with the C parser yet
if self.engine == 'python':
- # skip_footer
+ # skipfooter
data = """ignore
A,B,C
1,2,3 # comment
@@ -232,7 +232,7 @@ def test_malformed(self):
with tm.assertRaisesRegexp(Exception, msg):
self.read_table(StringIO(data), sep=',',
header=1, comment='#',
- skip_footer=1)
+ skipfooter=1)
def test_quoting(self):
bad_line_small = """printer\tresult\tvariant_name
@@ -524,11 +524,11 @@ def test_iterator(self):
self.assertEqual(len(result), 3)
tm.assert_frame_equal(pd.concat(result), expected)
- # skip_footer is not supported with the C parser yet
+ # skipfooter is not supported with the C parser yet
if self.engine == 'python':
- # test bad parameter (skip_footer)
+ # test bad parameter (skipfooter)
reader = self.read_csv(StringIO(self.data1), index_col=0,
- iterator=True, skip_footer=True)
+ iterator=True, skipfooter=True)
self.assertRaises(ValueError, reader.read, 3)
def test_pass_names_with_index(self):
diff --git a/pandas/io/tests/parser/python_parser_only.py b/pandas/io/tests/parser/python_parser_only.py
index 0408401672a2f..fbf23a23c7d40 100644
--- a/pandas/io/tests/parser/python_parser_only.py
+++ b/pandas/io/tests/parser/python_parser_only.py
@@ -98,7 +98,7 @@ def test_single_line(self):
finally:
sys.stdout = sys.__stdout__
- def test_skip_footer(self):
+ def test_skipfooter(self):
# see gh-6607
data = """A,B,C
1,2,3
@@ -107,7 +107,7 @@ def test_skip_footer(self):
want to skip this
also also skip this
"""
- result = self.read_csv(StringIO(data), skip_footer=2)
+ result = self.read_csv(StringIO(data), skipfooter=2)
no_footer = '\n'.join(data.split('\n')[:-3])
expected = self.read_csv(StringIO(no_footer))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index c8ad46af10795..ef8f7967193ff 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -52,7 +52,7 @@ def test_c_engine(self):
with tm.assertRaisesRegexp(ValueError, msg):
read_table(StringIO(data), sep='\s', dtype={'a': float})
with tm.assertRaisesRegexp(ValueError, msg):
- read_table(StringIO(data), skip_footer=1, dtype={'a': float})
+ read_table(StringIO(data), skipfooter=1, dtype={'a': float})
# specify C engine with unsupported options (raise)
with tm.assertRaisesRegexp(ValueError, msg):
@@ -61,7 +61,7 @@ def test_c_engine(self):
with tm.assertRaisesRegexp(ValueError, msg):
read_table(StringIO(data), engine='c', sep='\s')
with tm.assertRaisesRegexp(ValueError, msg):
- read_table(StringIO(data), engine='c', skip_footer=1)
+ read_table(StringIO(data), engine='c', skipfooter=1)
# specify C-unsupported options without python-unsupported options
with tm.assert_produces_warning(parsers.ParserWarning):
@@ -69,7 +69,7 @@ def test_c_engine(self):
with tm.assert_produces_warning(parsers.ParserWarning):
read_table(StringIO(data), sep='\s')
with tm.assert_produces_warning(parsers.ParserWarning):
- read_table(StringIO(data), skip_footer=1)
+ read_table(StringIO(data), skipfooter=1)
text = """ A B C D E
one two three four
@@ -127,6 +127,7 @@ def test_deprecated_args(self):
'as_recarray': True,
'buffer_lines': True,
'compact_ints': True,
+ 'skip_footer': True,
'use_unsigned': True,
}
@@ -134,8 +135,12 @@ def test_deprecated_args(self):
for engine in engines:
for arg, non_default_val in deprecated.items():
+ if engine == 'c' and arg == 'skip_footer':
+ # unsupported --> exception is raised
+ continue
+
if engine == 'python' and arg == 'buffer_lines':
- # unsupported --> exception is raised first
+ # unsupported --> exception is raised
continue
with tm.assert_produces_warning(
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index b5d1c8b7acf2c..e72e2f90a5213 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -165,7 +165,7 @@ cdef extern from "parser/tokenizer.h":
void *skipset
int64_t skip_first_N_rows
- int skip_footer
+ int skipfooter
double (*converter)(const char *, char **, char, char, char, int) nogil
# error handling
@@ -270,7 +270,7 @@ cdef class TextReader:
kh_str_t *true_set
cdef public:
- int leading_cols, table_width, skip_footer, buffer_lines
+ int leading_cols, table_width, skipfooter, buffer_lines
object allow_leading_cols
object delimiter, converters, delim_whitespace
object na_values
@@ -338,7 +338,7 @@ cdef class TextReader:
low_memory=False,
buffer_lines=None,
skiprows=None,
- skip_footer=0,
+ skipfooter=0,
verbose=False,
mangle_dupe_cols=True,
tupleize_cols=False,
@@ -418,7 +418,7 @@ cdef class TextReader:
if skiprows is not None:
self._make_skiprow_set()
- self.skip_footer = skip_footer
+ self.skipfooter = skipfooter
# suboptimal
if usecols is not None:
@@ -426,7 +426,7 @@ cdef class TextReader:
self.usecols = set(usecols)
# XXX
- if skip_footer > 0:
+ if skipfooter > 0:
self.parser.error_bad_lines = 0
self.parser.warn_bad_lines = 0
@@ -912,8 +912,8 @@ cdef class TextReader:
if buffered_lines < irows:
self._tokenize_rows(irows - buffered_lines)
- if self.skip_footer > 0:
- raise ValueError('skip_footer can only be used to read '
+ if self.skipfooter > 0:
+ raise ValueError('skipfooter can only be used to read '
'the whole file')
else:
with nogil:
@@ -926,7 +926,7 @@ cdef class TextReader:
if status < 0:
raise_parser_error('Error tokenizing data', self.parser)
- footer = self.skip_footer
+ footer = self.skipfooter
if self.parser_start == self.parser.lines:
raise StopIteration
| Title is self-explanatory.
Closes gh-13349 and partially undoes this <a href="https://github.com/pydata/pandas/commit/9ceea2fd46e79f37269a32de8c140cddf90eda13">commit</a> from back in `v0.9.0`. With such a massive API now, duplicate arguments make it far less practical to manage.
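The deprecation itself follows the standard pattern added in `parser_f`: accept both spellings, emit a `FutureWarning` when the old one is used, and route its value to the new name (the routing shown below is an assumption for illustration — the diff only shows the warning). A stripped-down sketch with a hypothetical function name:

```python
import warnings

def parse(skipfooter=0, skip_footer=0):
    # Warn on the deprecated alias and fall back to its value.
    if skip_footer != 0:
        warnings.warn("The 'skip_footer' argument has been deprecated and "
                      "will be removed in a future version. Please use the "
                      "'skipfooter' argument instead.",
                      FutureWarning, stacklevel=2)
        skipfooter = skip_footer
    return skipfooter

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert parse(skip_footer=2) == 2

assert len(caught) == 1
assert issubclass(caught[0].category, FutureWarning)
assert parse(skipfooter=3) == 3  # new spelling: no warning
```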
| https://api.github.com/repos/pandas-dev/pandas/pulls/13386 | 2016-06-07T15:09:41Z | 2016-07-29T00:25:25Z | null | 2016-07-29T02:29:54Z |
DOC, ENH: Support memory_map for Python engine | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 6802a448c4e14..61625104f5c1d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -198,6 +198,10 @@ use_unsigned : boolean, default False
If integer columns are being compacted (i.e. ``compact_ints=True``), specify whether
the column should be compacted to the smallest signed or unsigned integer dtype.
+memory_map : boolean, default False
+ If a filepath is provided for ``filepath_or_buffer``, map the file object
+ directly onto memory and access the data directly from there. Using this
+ option can improve performance because there is no longer any I/O overhead.
NA and Missing Data Handling
++++++++++++++++++++++++++++
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 1e95af2df247b..5aee616241406 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -76,6 +76,7 @@ Other enhancements
- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``decimal`` option (:issue:`12933`)
- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``na_filter`` option (:issue:`13321`)
+- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``memory_map`` option (:issue:`13381`)
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
- ``Index`` now supports the ``.where()`` function for same shape indexing (:issue:`13170`)
diff --git a/pandas/io/common.py b/pandas/io/common.py
index cf4bba6e97afb..76395928eb011 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -4,6 +4,7 @@
import os
import csv
import codecs
+import mmap
import zipfile
from contextlib import contextmanager, closing
@@ -276,7 +277,7 @@ def ZipFile(*args, **kwargs):
ZipFile = zipfile.ZipFile
-def _get_handle(path, mode, encoding=None, compression=None):
+def _get_handle(path, mode, encoding=None, compression=None, memory_map=False):
"""Gets file handle for given path and mode.
"""
if compression is not None:
@@ -324,9 +325,55 @@ def _get_handle(path, mode, encoding=None, compression=None):
else:
f = open(path, mode)
+ if memory_map and hasattr(f, 'fileno'):
+ try:
+ f = MMapWrapper(f)
+ except Exception:
+ # we catch any errors that may have occurred
+ # because that is consistent with the lower-level
+ # functionality of the C engine (pd.read_csv), so
+ # leave the file handler as is then
+ pass
+
return f
+class MMapWrapper(BaseIterator):
+ """
+    Wrapper for Python's mmap class so that it can be properly read in
+ by Python's csv.reader class.
+
+ Parameters
+ ----------
+ f : file object
+ File object to be mapped onto memory. Must support the 'fileno'
+ method or have an equivalent attribute
+
+ """
+
+ def __init__(self, f):
+ self.mmap = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
+
+ def __getattr__(self, name):
+ return getattr(self.mmap, name)
+
+ def __next__(self):
+ newline = self.mmap.readline()
+
+ # readline returns bytes, not str, in Python 3,
+ # but Python's CSV reader expects str, so convert
+ # the output to str before continuing
+ if compat.PY3:
+ newline = compat.bytes_to_str(newline)
+
+ # mmap doesn't raise if reading past the allocated
+ # data but instead returns an empty string, so raise
+ # if that is returned
+ if newline == '':
+ raise StopIteration
+ return newline
+
+
class UTF8Recoder(BaseIterator):
"""
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 0f0e1848750c0..4e954979f7d08 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -261,6 +261,10 @@
If integer columns are being compacted (i.e. `compact_ints=True`), specify
whether the column should be compacted to the smallest signed or unsigned
integer dtype.
+memory_map : boolean, default False
+ If a filepath is provided for `filepath_or_buffer`, map the file object
+ directly onto memory and access the data directly from there. Using this
+ option can improve performance because there is no longer any I/O overhead.
Returns
-------
@@ -459,7 +463,6 @@ def _read(filepath_or_buffer, kwds):
_c_unsupported = set(['skip_footer'])
_python_unsupported = set([
'low_memory',
- 'memory_map',
'buffer_lines',
'error_bad_lines',
'warn_bad_lines',
@@ -1683,6 +1686,7 @@ def __init__(self, f, **kwds):
self.encoding = kwds['encoding']
self.compression = kwds['compression']
+ self.memory_map = kwds['memory_map']
self.skiprows = kwds['skiprows']
self.skip_footer = kwds['skip_footer']
@@ -1718,7 +1722,8 @@ def __init__(self, f, **kwds):
if isinstance(f, compat.string_types):
f = _get_handle(f, 'r', encoding=self.encoding,
- compression=self.compression)
+ compression=self.compression,
+ memory_map=self.memory_map)
elif self.compression:
f = _wrap_compressed(f, self.compression, self.encoding)
# in Python 3, convert BytesIO or fileobjects passed with an encoding
diff --git a/pandas/io/tests/data/test_mmap.csv b/pandas/io/tests/data/test_mmap.csv
new file mode 100644
index 0000000000000..cc2cd7c30349b
--- /dev/null
+++ b/pandas/io/tests/data/test_mmap.csv
@@ -0,0 +1,5 @@
+a,b,c
+1,one,I
+2,two,II
+
+3,three,III
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 90103064774c1..b6048051edc4d 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -285,10 +285,6 @@ def test_usecols_dtypes(self):
self.assertTrue((result.dtypes == [object, np.int, np.float]).all())
self.assertTrue((result2.dtypes == [object, np.float]).all())
- def test_memory_map(self):
- # it works!
- self.read_csv(self.csv1, memory_map=True)
-
def test_disable_bool_parsing(self):
# #2090
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index fdaac71f59386..670f3df6f3984 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1458,3 +1458,14 @@ def test_as_recarray(self):
out = self.read_csv(StringIO(data), as_recarray=True,
usecols=['a'])
tm.assert_numpy_array_equal(out, expected)
+
+ def test_memory_map(self):
+ mmap_file = os.path.join(self.dirpath, 'test_mmap.csv')
+ expected = DataFrame({
+ 'a': [1, 2, 3],
+ 'b': ['one', 'two', 'three'],
+ 'c': ['I', 'II', 'III']
+ })
+
+ out = self.read_csv(mmap_file, memory_map=True)
+ tm.assert_frame_equal(out, expected)
diff --git a/pandas/io/tests/parser/data/test_mmap.csv b/pandas/io/tests/parser/data/test_mmap.csv
new file mode 100644
index 0000000000000..2885fc2bfbd69
--- /dev/null
+++ b/pandas/io/tests/parser/data/test_mmap.csv
@@ -0,0 +1,4 @@
+a,b,c
+1,one,I
+2,two,II
+3,three,III
diff --git a/pandas/io/tests/test_common.py b/pandas/io/tests/test_common.py
index 8615b75d87626..b70fca3ed2d20 100644
--- a/pandas/io/tests/test_common.py
+++ b/pandas/io/tests/test_common.py
@@ -2,6 +2,7 @@
Tests for the pandas.io.common functionalities
"""
from pandas.compat import StringIO
+import mmap
import os
from os.path import isabs
@@ -87,3 +88,49 @@ def test_iterator(self):
tm.assert_frame_equal(first, expected.iloc[[0]])
expected.index = [0 for i in range(len(expected))]
tm.assert_frame_equal(concat(it), expected.iloc[1:])
+
+
+class TestMMapWrapper(tm.TestCase):
+
+ def setUp(self):
+ self.mmap_file = os.path.join(tm.get_data_path(),
+ 'test_mmap.csv')
+
+ def test_constructor_bad_file(self):
+ non_file = StringIO('I am not a file')
+ non_file.fileno = lambda: -1
+
+ msg = "Invalid argument"
+ tm.assertRaisesRegexp(mmap.error, msg, common.MMapWrapper, non_file)
+
+ target = open(self.mmap_file, 'r')
+ target.close()
+
+ msg = "I/O operation on closed file"
+ tm.assertRaisesRegexp(ValueError, msg, common.MMapWrapper, target)
+
+ def test_get_attr(self):
+ target = open(self.mmap_file, 'r')
+ wrapper = common.MMapWrapper(target)
+
+ attrs = dir(wrapper.mmap)
+ attrs = [attr for attr in attrs
+ if not attr.startswith('__')]
+ attrs.append('__next__')
+
+ for attr in attrs:
+ self.assertTrue(hasattr(wrapper, attr))
+
+ self.assertFalse(hasattr(wrapper, 'foo'))
+
+ def test_next(self):
+ target = open(self.mmap_file, 'r')
+ wrapper = common.MMapWrapper(target)
+
+ lines = target.readlines()
+
+ for line in lines:
+ next_line = next(wrapper)
+ self.assertEqual(next_line, line)
+
+ self.assertRaises(StopIteration, next, wrapper)
| Title is self-explanatory.
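The core of `MMapWrapper` is just: map the file, `readline()` from the map, decode bytes to `str` for Python 3's `csv` reader, and treat the empty string as end-of-input (mmap returns `b''` past the end rather than raising). A self-contained stdlib sketch of the same idea (the class name here is a hypothetical stand-in):

```python
import mmap
import os
import tempfile

class MMapLines:
    """Iterate over a memory-mapped file line by line, yielding str."""

    def __init__(self, f):
        self.mmap = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    def __iter__(self):
        return self

    def __next__(self):
        line = self.mmap.readline()
        # Past the end, mmap returns b'' instead of raising -- convert
        # that into the StopIteration an iterating reader expects.
        if line == b'':
            raise StopIteration
        return line.decode('utf-8')

# Usage: write a tiny CSV, then read it back through the wrapper.
with tempfile.NamedTemporaryFile('wb', delete=False) as tmp:
    tmp.write(b'a,b,c\n1,one,I\n')
    path = tmp.name

with open(path, 'rb') as f:
    lines = list(MMapLines(f))
os.remove(path)

assert lines == ['a,b,c\n', '1,one,I\n']
```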
| https://api.github.com/repos/pandas-dev/pandas/pulls/13381 | 2016-06-06T14:21:05Z | 2016-06-08T11:24:04Z | null | 2016-06-08T11:24:41Z |
CLN: extract window functions from algos.pyx and create window.pyx | diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 7884d9c41845c..f1fd0204e2fd2 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -31,38 +31,17 @@ float16 = np.dtype(np.float16)
float32 = np.dtype(np.float32)
float64 = np.dtype(np.float64)
-cdef np.int8_t MINint8 = np.iinfo(np.int8).min
-cdef np.int16_t MINint16 = np.iinfo(np.int16).min
-cdef np.int32_t MINint32 = np.iinfo(np.int32).min
-cdef np.int64_t MINint64 = np.iinfo(np.int64).min
-cdef np.float16_t MINfloat16 = np.NINF
-cdef np.float32_t MINfloat32 = np.NINF
-cdef np.float64_t MINfloat64 = np.NINF
-
-cdef np.int8_t MAXint8 = np.iinfo(np.int8).max
-cdef np.int16_t MAXint16 = np.iinfo(np.int16).max
-cdef np.int32_t MAXint32 = np.iinfo(np.int32).max
-cdef np.int64_t MAXint64 = np.iinfo(np.int64).max
-cdef np.float16_t MAXfloat16 = np.inf
-cdef np.float32_t MAXfloat32 = np.inf
-cdef np.float64_t MAXfloat64 = np.inf
-
cdef double NaN = <double> np.NaN
cdef double nan = NaN
-cdef inline int int_max(int a, int b): return a if a >= b else b
-cdef inline int int_min(int a, int b): return a if a <= b else b
-
-
cdef extern from "src/headers/math.h":
double sqrt(double x) nogil
double fabs(double) nogil
- int signbit(double) nogil
-from pandas import lib
-
-include "skiplist.pyx"
+# this is our util.pxd
+from util cimport numeric
+from pandas import lib
cdef:
int TIEBREAK_AVERAGE = 0
@@ -720,57 +699,6 @@ def rank_2d_generic(object in_arr, axis=0, ties_method='average',
# return result
-# Cython implementations of rolling sum, mean, variance, skewness,
-# other statistical moment functions
-#
-# Misc implementation notes
-# -------------------------
-#
-# - In Cython x * x is faster than x ** 2 for C types, this should be
-# periodically revisited to see if it's still true.
-#
-# -
-
-def _check_minp(win, minp, N, floor=1):
- if minp > win:
- raise ValueError('min_periods (%d) must be <= window (%d)'
- % (minp, win))
- elif minp > N:
- minp = N + 1
- elif minp < 0:
- raise ValueError('min_periods must be >= 0')
- return max(minp, floor)
-
-# original C implementation by N. Devillard.
-# This code in public domain.
-# Function : kth_smallest()
-# In : array of elements, # of elements in the array, rank k
-# Out : one element
-# Job : find the kth smallest element in the array
-
-# Reference:
-
-# Author: Wirth, Niklaus
-# Title: Algorithms + data structures = programs
-# Publisher: Englewood Cliffs: Prentice-Hall, 1976
-# Physical description: 366 p.
-# Series: Prentice-Hall Series in Automatic Computation
-
-
-ctypedef fused numeric:
- int8_t
- int16_t
- int32_t
- int64_t
-
- uint8_t
- uint16_t
- uint32_t
- uint64_t
-
- float32_t
- float64_t
-
cdef inline Py_ssize_t swap(numeric *a, numeric *b) nogil except -1:
cdef numeric t
@@ -894,263 +822,6 @@ def min_subseq(ndarray[double_t] arr):
return (s, e, -m)
-#-------------------------------------------------------------------------------
-# Rolling sum
[email protected](False)
[email protected](False)
-def roll_sum(ndarray[double_t] input, int win, int minp):
- cdef double val, prev, sum_x = 0
- cdef int nobs = 0, i
- cdef int N = len(input)
-
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- minp = _check_minp(win, minp, N)
- with nogil:
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- sum_x += val
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if val == val:
- nobs += 1
- sum_x += val
-
- if i > win - 1:
- prev = input[i - win]
- if prev == prev:
- sum_x -= prev
- nobs -= 1
-
- if nobs >= minp:
- output[i] = sum_x
- else:
- output[i] = NaN
-
- return output
-
-#-------------------------------------------------------------------------------
-# Rolling mean
[email protected](False)
[email protected](False)
-def roll_mean(ndarray[double_t] input,
- int win, int minp):
- cdef:
- double val, prev, result, sum_x = 0
- Py_ssize_t nobs = 0, i, neg_ct = 0
- Py_ssize_t N = len(input)
-
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
- minp = _check_minp(win, minp, N)
- with nogil:
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- sum_x += val
- if signbit(val):
- neg_ct += 1
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if val == val:
- nobs += 1
- sum_x += val
- if signbit(val):
- neg_ct += 1
-
- if i > win - 1:
- prev = input[i - win]
- if prev == prev:
- sum_x -= prev
- nobs -= 1
- if signbit(prev):
- neg_ct -= 1
-
- if nobs >= minp:
- result = sum_x / nobs
- if neg_ct == 0 and result < 0:
- # all positive
- output[i] = 0
- elif neg_ct == nobs and result > 0:
- # all negative
- output[i] = 0
- else:
- output[i] = result
- else:
- output[i] = NaN
-
- return output
-
-#-------------------------------------------------------------------------------
-# Exponentially weighted moving average
-
-def ewma(ndarray[double_t] input, double_t com, int adjust, int ignore_na, int minp):
- """
- Compute exponentially-weighted moving average using center-of-mass.
-
- Parameters
- ----------
- input : ndarray (float64 type)
- com : float64
- adjust: int
- ignore_na: int
- minp: int
-
- Returns
- -------
- y : ndarray
- """
-
- cdef Py_ssize_t N = len(input)
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
- if N == 0:
- return output
-
- minp = max(minp, 1)
-
- cdef double alpha, old_wt_factor, new_wt, weighted_avg, old_wt, cur
- cdef Py_ssize_t i, nobs
-
- alpha = 1. / (1. + com)
- old_wt_factor = 1. - alpha
- new_wt = 1. if adjust else alpha
-
- weighted_avg = input[0]
- is_observation = (weighted_avg == weighted_avg)
- nobs = int(is_observation)
- output[0] = weighted_avg if (nobs >= minp) else NaN
- old_wt = 1.
-
- for i from 1 <= i < N:
- cur = input[i]
- is_observation = (cur == cur)
- nobs += int(is_observation)
- if weighted_avg == weighted_avg:
- if is_observation or (not ignore_na):
- old_wt *= old_wt_factor
- if is_observation:
- if weighted_avg != cur: # avoid numerical errors on constant series
- weighted_avg = ((old_wt * weighted_avg) + (new_wt * cur)) / (old_wt + new_wt)
- if adjust:
- old_wt += new_wt
- else:
- old_wt = 1.
- elif is_observation:
- weighted_avg = cur
-
- output[i] = weighted_avg if (nobs >= minp) else NaN
-
- return output
-
-#-------------------------------------------------------------------------------
-# Exponentially weighted moving covariance
-
-def ewmcov(ndarray[double_t] input_x, ndarray[double_t] input_y,
- double_t com, int adjust, int ignore_na, int minp, int bias):
- """
- Compute exponentially-weighted moving variance using center-of-mass.
-
- Parameters
- ----------
- input_x : ndarray (float64 type)
- input_y : ndarray (float64 type)
- com : float64
- adjust: int
- ignore_na: int
- minp: int
- bias: int
-
- Returns
- -------
- y : ndarray
- """
-
- cdef Py_ssize_t N = len(input_x)
- if len(input_y) != N:
- raise ValueError('arrays are of different lengths (%d and %d)' % (N, len(input_y)))
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
- if N == 0:
- return output
-
- minp = max(minp, 1)
-
- cdef double alpha, old_wt_factor, new_wt, mean_x, mean_y, cov
- cdef double sum_wt, sum_wt2, old_wt, cur_x, cur_y, old_mean_x, old_mean_y
- cdef Py_ssize_t i, nobs
-
- alpha = 1. / (1. + com)
- old_wt_factor = 1. - alpha
- new_wt = 1. if adjust else alpha
-
- mean_x = input_x[0]
- mean_y = input_y[0]
- is_observation = ((mean_x == mean_x) and (mean_y == mean_y))
- nobs = int(is_observation)
- if not is_observation:
- mean_x = NaN
- mean_y = NaN
- output[0] = (0. if bias else NaN) if (nobs >= minp) else NaN
- cov = 0.
- sum_wt = 1.
- sum_wt2 = 1.
- old_wt = 1.
-
- for i from 1 <= i < N:
- cur_x = input_x[i]
- cur_y = input_y[i]
- is_observation = ((cur_x == cur_x) and (cur_y == cur_y))
- nobs += int(is_observation)
- if mean_x == mean_x:
- if is_observation or (not ignore_na):
- sum_wt *= old_wt_factor
- sum_wt2 *= (old_wt_factor * old_wt_factor)
- old_wt *= old_wt_factor
- if is_observation:
- old_mean_x = mean_x
- old_mean_y = mean_y
- if mean_x != cur_x: # avoid numerical errors on constant series
- mean_x = ((old_wt * old_mean_x) + (new_wt * cur_x)) / (old_wt + new_wt)
- if mean_y != cur_y: # avoid numerical errors on constant series
- mean_y = ((old_wt * old_mean_y) + (new_wt * cur_y)) / (old_wt + new_wt)
- cov = ((old_wt * (cov + ((old_mean_x - mean_x) * (old_mean_y - mean_y)))) +
- (new_wt * ((cur_x - mean_x) * (cur_y - mean_y)))) / (old_wt + new_wt)
- sum_wt += new_wt
- sum_wt2 += (new_wt * new_wt)
- old_wt += new_wt
- if not adjust:
- sum_wt /= old_wt
- sum_wt2 /= (old_wt * old_wt)
- old_wt = 1.
- elif is_observation:
- mean_x = cur_x
- mean_y = cur_y
-
- if nobs >= minp:
- if not bias:
- numerator = sum_wt * sum_wt
- denominator = numerator - sum_wt2
- output[i] = ((numerator / denominator) * cov) if (denominator > 0.) else NaN
- else:
- output[i] = cov
- else:
- output[i] = NaN
-
- return output
-
#----------------------------------------------------------------------
# Pairwise correlation/covariance
@@ -1273,613 +944,9 @@ def nancorr_spearman(ndarray[float64_t, ndim=2] mat, Py_ssize_t minp=1):
return result
-#----------------------------------------------------------------------
-# Rolling variance
-
[email protected](False)
[email protected](False)
-def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
- """
- Numerically stable implementation using Welford's method.
- """
- cdef double val, prev, mean_x = 0, ssqdm_x = 0, nobs = 0, delta
- cdef Py_ssize_t i
- cdef Py_ssize_t N = len(input)
-
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- minp = _check_minp(win, minp, N)
-
- # Check for windows larger than array, addresses #7297
- win = min(win, N)
-
- with nogil:
- # Over the first window, observations can only be added, never removed
- for i from 0 <= i < win:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- delta = (val - mean_x)
- mean_x += delta / nobs
- ssqdm_x += delta * (val - mean_x)
-
- if (nobs >= minp) and (nobs > ddof):
- #pathological case
- if nobs == 1:
- val = 0
- else:
- val = ssqdm_x / (nobs - ddof)
- if val < 0:
- val = 0
- else:
- val = NaN
-
- output[i] = val
-
- # After the first window, observations can both be added and removed
- for i from win <= i < N:
- val = input[i]
- prev = input[i - win]
-
- if val == val:
- if prev == prev:
- # Adding one observation and removing another one
- delta = val - prev
- prev -= mean_x
- mean_x += delta / nobs
- val -= mean_x
- ssqdm_x += (val + prev) * delta
- else:
- # Adding one observation and not removing any
- nobs += 1
- delta = (val - mean_x)
- mean_x += delta / nobs
- ssqdm_x += delta * (val - mean_x)
- elif prev == prev:
- # Adding no new observation, but removing one
- nobs -= 1
- if nobs:
- delta = (prev - mean_x)
- mean_x -= delta / nobs
- ssqdm_x -= delta * (prev - mean_x)
- else:
- mean_x = 0
- ssqdm_x = 0
- # Variance is unchanged if no observation is added or removed
-
- if (nobs >= minp) and (nobs > ddof):
- #pathological case
- if nobs == 1:
- val = 0
- else:
- val = ssqdm_x / (nobs - ddof)
- if val < 0:
- val = 0
- else:
- val = NaN
-
- output[i] = val
-
- return output
-
-
-#-------------------------------------------------------------------------------
-# Rolling skewness
[email protected](False)
[email protected](False)
-def roll_skew(ndarray[double_t] input, int win, int minp):
- cdef double val, prev
- cdef double x = 0, xx = 0, xxx = 0
- cdef Py_ssize_t nobs = 0, i
- cdef Py_ssize_t N = len(input)
-
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- # 3 components of the skewness equation
- cdef double A, B, C, R
-
- minp = _check_minp(win, minp, N)
- with nogil:
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- x += val
- xx += val * val
- xxx += val * val * val
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if val == val:
- nobs += 1
- x += val
- xx += val * val
- xxx += val * val * val
-
- if i > win - 1:
- prev = input[i - win]
- if prev == prev:
- x -= prev
- xx -= prev * prev
- xxx -= prev * prev * prev
-
- nobs -= 1
- if nobs >= minp:
- A = x / nobs
- B = xx / nobs - A * A
- C = xxx / nobs - A * A * A - 3 * A * B
- if B <= 0 or nobs < 3:
- output[i] = NaN
- else:
- R = sqrt(B)
- output[i] = ((sqrt(nobs * (nobs - 1.)) * C) /
- ((nobs-2) * R * R * R))
- else:
- output[i] = NaN
-
- return output
-
-#-------------------------------------------------------------------------------
-# Rolling kurtosis
[email protected](False)
[email protected](False)
-def roll_kurt(ndarray[double_t] input,
- int win, int minp):
- cdef double val, prev
- cdef double x = 0, xx = 0, xxx = 0, xxxx = 0
- cdef Py_ssize_t nobs = 0, i
- cdef Py_ssize_t N = len(input)
-
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- # 5 components of the kurtosis equation
- cdef double A, B, C, D, R, K
-
- minp = _check_minp(win, minp, N)
- with nogil:
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
-
- # seriously don't ask me why this is faster
- x += val
- xx += val * val
- xxx += val * val * val
- xxxx += val * val * val * val
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if val == val:
- nobs += 1
- x += val
- xx += val * val
- xxx += val * val * val
- xxxx += val * val * val * val
-
- if i > win - 1:
- prev = input[i - win]
- if prev == prev:
- x -= prev
- xx -= prev * prev
- xxx -= prev * prev * prev
- xxxx -= prev * prev * prev * prev
-
- nobs -= 1
-
- if nobs >= minp:
- A = x / nobs
- R = A * A
- B = xx / nobs - R
- R = R * A
- C = xxx / nobs - R - 3 * A * B
- R = R * A
- D = xxxx / nobs - R - 6*B*A*A - 4*C*A
-
- if B == 0 or nobs < 4:
- output[i] = NaN
-
- else:
- K = (nobs * nobs - 1.)*D/(B*B) - 3*((nobs-1.)**2)
- K = K / ((nobs - 2.)*(nobs-3.))
-
- output[i] = K
-
- else:
- output[i] = NaN
-
- return output
-
-#-------------------------------------------------------------------------------
-# Rolling median, min, max
-
-from skiplist cimport *
-
[email protected](False)
[email protected](False)
-def roll_median_c(ndarray[float64_t] arg, int win, int minp):
- cdef:
- double val, res, prev
- bint err=0
- int ret=0
- skiplist_t *sl
- Py_ssize_t midpoint, nobs = 0, i
-
-
- cdef Py_ssize_t N = len(arg)
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- sl = skiplist_init(win)
- if sl == NULL:
- raise MemoryError("skiplist_init failed")
-
- minp = _check_minp(win, minp, N)
-
- with nogil:
- for i from 0 <= i < minp - 1:
- val = arg[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- err = skiplist_insert(sl, val) != 1
- if err:
- break
- output[i] = NaN
-
- with nogil:
- if not err:
- for i from minp - 1 <= i < N:
-
- val = arg[i]
-
- if i > win - 1:
- prev = arg[i - win]
-
- if prev == prev:
- skiplist_remove(sl, prev)
- nobs -= 1
-
- if val == val:
- nobs += 1
- err = skiplist_insert(sl, val) != 1
- if err:
- break
-
- if nobs >= minp:
- midpoint = nobs / 2
- if nobs % 2:
- res = skiplist_get(sl, midpoint, &ret)
- else:
- res = (skiplist_get(sl, midpoint, &ret) +
- skiplist_get(sl, (midpoint - 1), &ret)) / 2
- else:
- res = NaN
-
- output[i] = res
-
- skiplist_destroy(sl)
- if err:
- raise MemoryError("skiplist_insert failed")
- return output
-
-#----------------------------------------------------------------------
-
-# Moving maximum / minimum code taken from Bottleneck under the terms
-# of its Simplified BSD license
-# https://github.com/kwgoodman/bottleneck
-
-from libc cimport stdlib
-
[email protected](False)
[email protected](False)
-def roll_max(ndarray[numeric] a, int window, int minp):
- """
- Moving max of 1d array of any numeric type along axis=0 ignoring NaNs.
-
- Parameters
- ----------
- a: numpy array
- window: int, size of rolling window
- minp: if number of observations in window
- is below this, output a NaN
- """
- return _roll_min_max(a, window, minp, 1)
-
[email protected](False)
[email protected](False)
-def roll_min(ndarray[numeric] a, int window, int minp):
- """
- Moving max of 1d array of any numeric type along axis=0 ignoring NaNs.
-
- Parameters
- ----------
- a: numpy array
- window: int, size of rolling window
- minp: if number of observations in window
- is below this, output a NaN
- """
- return _roll_min_max(a, window, minp, 0)
-
[email protected](False)
[email protected](False)
-cdef _roll_min_max(ndarray[numeric] a, int window, int minp, bint is_max):
- "Moving min/max of 1d array of any numeric type along axis=0 ignoring NaNs."
- cdef numeric ai, aold
- cdef Py_ssize_t count
- cdef Py_ssize_t* death
- cdef numeric* ring
- cdef numeric* minvalue
- cdef numeric* end
- cdef numeric* last
- cdef Py_ssize_t i0
- cdef np.npy_intp *dim
- dim = PyArray_DIMS(a)
- cdef Py_ssize_t n0 = dim[0]
- cdef np.npy_intp *dims = [n0]
- cdef bint should_replace
- cdef np.ndarray[numeric, ndim=1] y = PyArray_EMPTY(1, dims, PyArray_TYPE(a), 0)
-
- if window < 1:
- raise ValueError('Invalid window size %d'
- % (window))
-
- if minp > window:
- raise ValueError('Invalid min_periods size %d greater than window %d'
- % (minp, window))
-
- minp = _check_minp(window, minp, n0)
- with nogil:
- ring = <numeric*>stdlib.malloc(window * sizeof(numeric))
- death = <Py_ssize_t*>stdlib.malloc(window * sizeof(Py_ssize_t))
- end = ring + window
- last = ring
-
- minvalue = ring
- ai = a[0]
- if numeric in cython.floating:
- if ai == ai:
- minvalue[0] = ai
- elif is_max:
- minvalue[0] = MINfloat64
- else:
- minvalue[0] = MAXfloat64
- else:
- minvalue[0] = ai
- death[0] = window
-
- count = 0
- for i0 in range(n0):
- ai = a[i0]
- if numeric in cython.floating:
- if ai == ai:
- count += 1
- elif is_max:
- ai = MINfloat64
- else:
- ai = MAXfloat64
- else:
- count += 1
- if i0 >= window:
- aold = a[i0 - window]
- if aold == aold:
- count -= 1
- if death[minvalue-ring] == i0:
- minvalue += 1
- if minvalue >= end:
- minvalue = ring
- should_replace = ai >= minvalue[0] if is_max else ai <= minvalue[0]
- if should_replace:
- minvalue[0] = ai
- death[minvalue-ring] = i0 + window
- last = minvalue
- else:
- should_replace = last[0] <= ai if is_max else last[0] >= ai
- while should_replace:
- if last == ring:
- last = end
- last -= 1
- should_replace = last[0] <= ai if is_max else last[0] >= ai
- last += 1
- if last == end:
- last = ring
- last[0] = ai
- death[last - ring] = i0 + window
- if numeric in cython.floating:
- if count >= minp:
- y[i0] = minvalue[0]
- else:
- y[i0] = NaN
- else:
- y[i0] = minvalue[0]
-
- for i0 in range(minp - 1):
- if numeric in cython.floating:
- y[i0] = NaN
- else:
- y[i0] = 0
-
- stdlib.free(ring)
- stdlib.free(death)
- return y
-
-def roll_quantile(ndarray[float64_t, cast=True] input, int win,
- int minp, double quantile):
- """
- O(N log(window)) implementation using skip list
- """
- cdef double val, prev, midpoint
- cdef IndexableSkiplist skiplist
- cdef Py_ssize_t nobs = 0, i
- cdef Py_ssize_t N = len(input)
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- skiplist = IndexableSkiplist(win)
-
- minp = _check_minp(win, minp, N)
-
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- skiplist.insert(val)
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if i > win - 1:
- prev = input[i - win]
-
- if prev == prev:
- skiplist.remove(prev)
- nobs -= 1
-
- if val == val:
- nobs += 1
- skiplist.insert(val)
-
- if nobs >= minp:
- idx = int((quantile / 1.) * (nobs - 1))
- output[i] = skiplist.get(idx)
- else:
- output[i] = NaN
-
- return output
-
-def roll_generic(ndarray[float64_t, cast=True] input,
- int win, int minp, int offset,
- object func, object args, object kwargs):
- cdef ndarray[double_t] output, counts, bufarr
- cdef Py_ssize_t i, n
- cdef float64_t *buf
- cdef float64_t *oldbuf
-
- if not input.flags.c_contiguous:
- input = input.copy('C')
-
- n = len(input)
- if n == 0:
- return input
-
- minp = _check_minp(win, minp, n, floor=0)
- output = np.empty(n, dtype=float)
- counts = roll_sum(np.concatenate((np.isfinite(input).astype(float), np.array([0.] * offset))), win, minp)[offset:]
-
- # truncated windows at the beginning, through first full-length window
- for i from 0 <= i < (int_min(win, n) - offset):
- if counts[i] >= minp:
- output[i] = func(input[0 : (i + offset + 1)], *args, **kwargs)
- else:
- output[i] = NaN
-
- # remaining full-length windows
- buf = <float64_t*> input.data
- bufarr = np.empty(win, dtype=float)
- oldbuf = <float64_t*> bufarr.data
- for i from (win - offset) <= i < (n - offset):
- buf = buf + 1
- bufarr.data = <char*> buf
- if counts[i] >= minp:
- output[i] = func(bufarr, *args, **kwargs)
- else:
- output[i] = NaN
- bufarr.data = <char*> oldbuf
-
- # truncated windows at the end
- for i from int_max(n - offset, 0) <= i < n:
- if counts[i] >= minp:
- output[i] = func(input[int_max(i + offset - win + 1, 0) : n], *args, **kwargs)
- else:
- output[i] = NaN
-
- return output
-
-
-def roll_window(ndarray[float64_t, ndim=1, cast=True] input,
- ndarray[float64_t, ndim=1, cast=True] weights,
- int minp, bint avg=True):
- """
- Assume len(weights) << len(input)
- """
- cdef:
- ndarray[double_t] output, tot_wgt, counts
- Py_ssize_t in_i, win_i, win_n, win_k, in_n, in_k
- float64_t val_in, val_win, c, w
-
- in_n = len(input)
- win_n = len(weights)
- output = np.zeros(in_n, dtype=float)
- counts = np.zeros(in_n, dtype=float)
- if avg:
- tot_wgt = np.zeros(in_n, dtype=float)
-
- minp = _check_minp(len(weights), minp, in_n)
-
- if avg:
- for win_i from 0 <= win_i < win_n:
- val_win = weights[win_i]
- if val_win != val_win:
- continue
-
- for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
- val_in = input[in_i]
- if val_in == val_in:
- output[in_i + (win_n - win_i) - 1] += val_in * val_win
- counts[in_i + (win_n - win_i) - 1] += 1
- tot_wgt[in_i + (win_n - win_i) - 1] += val_win
-
- for in_i from 0 <= in_i < in_n:
- c = counts[in_i]
- if c < minp:
- output[in_i] = NaN
- else:
- w = tot_wgt[in_i]
- if w == 0:
- output[in_i] = NaN
- else:
- output[in_i] /= tot_wgt[in_i]
-
- else:
- for win_i from 0 <= win_i < win_n:
- val_win = weights[win_i]
- if val_win != val_win:
- continue
-
- for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
- val_in = input[in_i]
-
- if val_in == val_in:
- output[in_i + (win_n - win_i) - 1] += val_in * val_win
- counts[in_i + (win_n - win_i) - 1] += 1
-
- for in_i from 0 <= in_i < in_n:
- c = counts[in_i]
- if c < minp:
- output[in_i] = NaN
-
- return output
-
-
#----------------------------------------------------------------------
# group operations
-
@cython.wraparound(False)
@cython.boundscheck(False)
def is_lexsorted(list list_of_arrays):
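The `roll_var` kernel removed above (and re-added in the new `pandas/window.pyx` later in this diff) keeps a running mean and sum of squared deviations using Welford's method, so each element costs O(1) work. A minimal pure-Python sketch of the same add/remove update rule — illustrative only, not part of the patch:

```python
def rolling_var(values, win, minp, ddof=1):
    """Pure-Python analogue of the Welford-style rolling-variance kernel.

    NaNs are skipped; windows with fewer than `minp` (or <= `ddof`)
    observations yield NaN, matching the Cython version's contract.
    """
    n = len(values)
    out = [float("nan")] * n
    nobs = 0
    mean_x = 0.0
    ssqdm_x = 0.0
    win = min(win, n)  # windows larger than the array, see GH#7297
    for i, val in enumerate(values):
        # add the incoming observation if it is not NaN
        if val == val:
            nobs += 1
            delta = val - mean_x
            mean_x += delta / nobs
            ssqdm_x += delta * (val - mean_x)
        # drop the observation falling out of the window
        if i >= win:
            prev = values[i - win]
            if prev == prev:
                nobs -= 1
                if nobs:
                    delta = prev - mean_x
                    mean_x -= delta / nobs
                    ssqdm_x -= delta * (prev - mean_x)
                else:
                    mean_x = 0.0
                    ssqdm_x = 0.0
        if nobs >= minp and nobs > ddof:
            out[i] = 0.0 if nobs == 1 else max(ssqdm_x / (nobs - ddof), 0.0)
    return out
```

The Cython kernel fuses the add and remove steps into a single combined delta when both occur; the sequential version above is mathematically equivalent but slightly less careful numerically.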
diff --git a/pandas/core/window.py b/pandas/core/window.py
index bf3fd69c6340b..fbc56335aabd9 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -16,7 +16,7 @@
from pandas.core.base import (PandasObject, SelectionMixin,
GroupByMixin)
import pandas.core.common as com
-import pandas.algos as algos
+import pandas._window as _window
from pandas import compat
from pandas.compat.numpy import function as nv
from pandas.util.decorators import Substitution, Appender
@@ -407,9 +407,10 @@ def _apply_window(self, mean=True, how=None, **kwargs):
def f(arg, *args, **kwargs):
minp = _use_window(self.min_periods, len(window))
- return algos.roll_window(np.concatenate((arg, additional_nans))
- if center else arg, window, minp,
- avg=mean)
+ return _window.roll_window(np.concatenate((arg,
+ additional_nans))
+ if center else arg, window, minp,
+ avg=mean)
result = np.apply_along_axis(f, self.axis, values)
@@ -532,11 +533,10 @@ def _apply(self, func, name=None, window=None, center=None,
# if we have a string function name, wrap it
if isinstance(func, compat.string_types):
- if not hasattr(algos, func):
+ cfunc = getattr(_window, func, None)
+ if cfunc is None:
raise ValueError("we do not support this function "
- "algos.{0}".format(func))
-
- cfunc = getattr(algos, func)
+ "in _window.{0}".format(func))
def func(arg, window, min_periods=None):
minp = check_minp(min_periods, window)
@@ -617,8 +617,8 @@ def apply(self, func, args=(), kwargs={}):
def f(arg, window, min_periods):
minp = _use_window(min_periods, window)
- return algos.roll_generic(arg, window, minp, offset, func, args,
- kwargs)
+ return _window.roll_generic(arg, window, minp, offset, func, args,
+ kwargs)
return self._apply(f, func, args=args, kwargs=kwargs,
center=False)
@@ -687,7 +687,7 @@ def std(self, ddof=1, *args, **kwargs):
def f(arg, *args, **kwargs):
minp = _require_min_periods(1)(self.min_periods, window)
- return _zsqrt(algos.roll_var(arg, window, minp, ddof))
+ return _zsqrt(_window.roll_var(arg, window, minp, ddof))
return self._apply(f, 'std', check_minp=_require_min_periods(1),
ddof=ddof, **kwargs)
@@ -732,7 +732,7 @@ def quantile(self, quantile, **kwargs):
def f(arg, *args, **kwargs):
minp = _use_window(self.min_periods, window)
- return algos.roll_quantile(arg, window, minp, quantile)
+ return _window.roll_quantile(arg, window, minp, quantile)
return self._apply(f, 'quantile', quantile=quantile,
**kwargs)
@@ -1278,11 +1278,10 @@ def _apply(self, func, how=None, **kwargs):
# if we have a string function name, wrap it
if isinstance(func, compat.string_types):
- if not hasattr(algos, func):
+ cfunc = getattr(_window, func, None)
+ if cfunc is None:
raise ValueError("we do not support this function "
- "algos.{0}".format(func))
-
- cfunc = getattr(algos, func)
+ "in _window.{0}".format(func))
def func(arg):
return cfunc(arg, self.com, int(self.adjust),
@@ -1317,9 +1316,9 @@ def var(self, bias=False, *args, **kwargs):
nv.validate_window_func('var', args, kwargs)
def f(arg):
- return algos.ewmcov(arg, arg, self.com, int(self.adjust),
- int(self.ignore_na), int(self.min_periods),
- int(bias))
+ return _window.ewmcov(arg, arg, self.com, int(self.adjust),
+ int(self.ignore_na), int(self.min_periods),
+ int(bias))
return self._apply(f, **kwargs)
@@ -1337,9 +1336,9 @@ def cov(self, other=None, pairwise=None, bias=False, **kwargs):
def _get_cov(X, Y):
X = self._shallow_copy(X)
Y = self._shallow_copy(Y)
- cov = algos.ewmcov(X._prep_values(), Y._prep_values(), self.com,
- int(self.adjust), int(self.ignore_na),
- int(self.min_periods), int(bias))
+ cov = _window.ewmcov(X._prep_values(), Y._prep_values(), self.com,
+ int(self.adjust), int(self.ignore_na),
+ int(self.min_periods), int(bias))
return X._wrap_result(cov)
return _flex_binary_moment(self._selected_obj, other._selected_obj,
@@ -1361,9 +1360,10 @@ def _get_corr(X, Y):
Y = self._shallow_copy(Y)
def _cov(x, y):
- return algos.ewmcov(x, y, self.com, int(self.adjust),
- int(self.ignore_na), int(self.min_periods),
- 1)
+ return _window.ewmcov(x, y, self.com, int(self.adjust),
+ int(self.ignore_na),
+ int(self.min_periods),
+ 1)
x_values = X._prep_values()
y_values = Y._prep_values()
diff --git a/pandas/lib.pxd b/pandas/lib.pxd
index ba52e4cc47c89..36c91faa00036 100644
--- a/pandas/lib.pxd
+++ b/pandas/lib.pxd
@@ -1 +1,3 @@
+# prototypes for sharing
+
cdef bint is_null_datetimelike(v)
diff --git a/pandas/src/util.pxd b/pandas/src/util.pxd
index 84b331f1e8e6f..96a23a91cc7c2 100644
--- a/pandas/src/util.pxd
+++ b/pandas/src/util.pxd
@@ -24,6 +24,20 @@ cdef extern from "numpy_helper.h":
object sarr_from_data(cnp.dtype, int length, void* data)
inline object unbox_if_zerodim(object arr)
+ctypedef fused numeric:
+ cnp.int8_t
+ cnp.int16_t
+ cnp.int32_t
+ cnp.int64_t
+
+ cnp.uint8_t
+ cnp.uint16_t
+ cnp.uint32_t
+ cnp.uint64_t
+
+ cnp.float32_t
+ cnp.float64_t
+
cdef inline object get_value_at(ndarray arr, object loc):
cdef:
Py_ssize_t i, sz
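The `ewma` kernel moved into the new `pandas/window.pyx` below implements the adjusted recursion by carrying an accumulated weight for the running average rather than rescaling the whole history. A rough pure-Python equivalent of the `adjust=1`, NaN-free path — a sketch under those assumptions, not the shipped implementation:

```python
def ewma_adjust(values, com):
    """Adjusted EWMA with alpha = 1 / (1 + com).

    Each new point enters with weight 1 while the accumulated weight
    of all prior points decays by (1 - alpha), so the result equals
    sum((1-alpha)**k * x[t-k]) / sum((1-alpha)**k) at every step.
    """
    alpha = 1.0 / (1.0 + com)
    old_wt_factor = 1.0 - alpha
    avg = values[0]
    old_wt = 1.0
    out = [avg]
    for cur in values[1:]:
        old_wt *= old_wt_factor
        if avg != cur:  # avoid numerical error on constant series
            avg = (old_wt * avg + cur) / (old_wt + 1.0)
        old_wt += 1.0
        out.append(avg)
    return out
```

For `com=1` (`alpha=0.5`) and input `[1, 2]`, the second output is `(0.5*1 + 1*2) / 1.5 = 5/3`, matching the direct weighted-average formula.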
diff --git a/pandas/window.pyx b/pandas/window.pyx
new file mode 100644
index 0000000000000..bfe9152477a40
--- /dev/null
+++ b/pandas/window.pyx
@@ -0,0 +1,954 @@
+from numpy cimport *
+cimport numpy as np
+import numpy as np
+
+cimport cython
+
+import_array()
+
+cimport util
+
+from libc.stdlib cimport malloc, free
+
+from numpy cimport NPY_INT8 as NPY_int8
+from numpy cimport NPY_INT16 as NPY_int16
+from numpy cimport NPY_INT32 as NPY_int32
+from numpy cimport NPY_INT64 as NPY_int64
+from numpy cimport NPY_FLOAT16 as NPY_float16
+from numpy cimport NPY_FLOAT32 as NPY_float32
+from numpy cimport NPY_FLOAT64 as NPY_float64
+
+from numpy cimport (int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t,
+ uint32_t, uint64_t, float16_t, float32_t, float64_t)
+
+int8 = np.dtype(np.int8)
+int16 = np.dtype(np.int16)
+int32 = np.dtype(np.int32)
+int64 = np.dtype(np.int64)
+float16 = np.dtype(np.float16)
+float32 = np.dtype(np.float32)
+float64 = np.dtype(np.float64)
+
+cdef np.int8_t MINint8 = np.iinfo(np.int8).min
+cdef np.int16_t MINint16 = np.iinfo(np.int16).min
+cdef np.int32_t MINint32 = np.iinfo(np.int32).min
+cdef np.int64_t MINint64 = np.iinfo(np.int64).min
+cdef np.float16_t MINfloat16 = np.NINF
+cdef np.float32_t MINfloat32 = np.NINF
+cdef np.float64_t MINfloat64 = np.NINF
+
+cdef np.int8_t MAXint8 = np.iinfo(np.int8).max
+cdef np.int16_t MAXint16 = np.iinfo(np.int16).max
+cdef np.int32_t MAXint32 = np.iinfo(np.int32).max
+cdef np.int64_t MAXint64 = np.iinfo(np.int64).max
+cdef np.float16_t MAXfloat16 = np.inf
+cdef np.float32_t MAXfloat32 = np.inf
+cdef np.float64_t MAXfloat64 = np.inf
+
+cdef double NaN = <double> np.NaN
+cdef double nan = NaN
+
+cdef inline int int_max(int a, int b): return a if a >= b else b
+cdef inline int int_min(int a, int b): return a if a <= b else b
+
+# this is our util.pxd
+from util cimport numeric
+
+cdef extern from "src/headers/math.h":
+ double sqrt(double x) nogil
+ int signbit(double) nogil
+
+include "skiplist.pyx"
+
+# Cython implementations of rolling sum, mean, variance, skewness,
+# other statistical moment functions
+#
+# Misc implementation notes
+# -------------------------
+#
+# - In Cython x * x is faster than x ** 2 for C types, this should be
+# periodically revisited to see if it's still true.
+#
+# -
+
+def _check_minp(win, minp, N, floor=1):
+ if minp > win:
+ raise ValueError('min_periods (%d) must be <= window (%d)'
+ % (minp, win))
+ elif minp > N:
+ minp = N + 1
+ elif minp < 0:
+ raise ValueError('min_periods must be >= 0')
+ return max(minp, floor)
+
+# original C implementation by N. Devillard.
+# This code in public domain.
+# Function : kth_smallest()
+# In : array of elements, # of elements in the array, rank k
+# Out : one element
+# Job : find the kth smallest element in the array
+
+# Reference:
+
+# Author: Wirth, Niklaus
+# Title: Algorithms + data structures = programs
+# Publisher: Englewood Cliffs: Prentice-Hall, 1976
+# Physical description: 366 p.
+# Series: Prentice-Hall Series in Automatic Computation
+
+#-------------------------------------------------------------------------------
+# Rolling sum
[email protected](False)
[email protected](False)
+def roll_sum(ndarray[double_t] input, int win, int minp):
+ cdef double val, prev, sum_x = 0
+ cdef int nobs = 0, i
+ cdef int N = len(input)
+
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ minp = _check_minp(win, minp, N)
+ with nogil:
+ for i from 0 <= i < minp - 1:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ sum_x += val
+
+ output[i] = NaN
+
+ for i from minp - 1 <= i < N:
+ val = input[i]
+
+ if val == val:
+ nobs += 1
+ sum_x += val
+
+ if i > win - 1:
+ prev = input[i - win]
+ if prev == prev:
+ sum_x -= prev
+ nobs -= 1
+
+ if nobs >= minp:
+ output[i] = sum_x
+ else:
+ output[i] = NaN
+
+ return output
+
+#-------------------------------------------------------------------------------
+# Rolling mean
[email protected](False)
[email protected](False)
+def roll_mean(ndarray[double_t] input,
+ int win, int minp):
+ cdef:
+ double val, prev, result, sum_x = 0
+ Py_ssize_t nobs = 0, i, neg_ct = 0
+ Py_ssize_t N = len(input)
+
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+ minp = _check_minp(win, minp, N)
+ with nogil:
+ for i from 0 <= i < minp - 1:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ sum_x += val
+ if signbit(val):
+ neg_ct += 1
+
+ output[i] = NaN
+
+ for i from minp - 1 <= i < N:
+ val = input[i]
+
+ if val == val:
+ nobs += 1
+ sum_x += val
+ if signbit(val):
+ neg_ct += 1
+
+ if i > win - 1:
+ prev = input[i - win]
+ if prev == prev:
+ sum_x -= prev
+ nobs -= 1
+ if signbit(prev):
+ neg_ct -= 1
+
+ if nobs >= minp:
+ result = sum_x / nobs
+ if neg_ct == 0 and result < 0:
+ # all positive
+ output[i] = 0
+ elif neg_ct == nobs and result > 0:
+ # all negative
+ output[i] = 0
+ else:
+ output[i] = result
+ else:
+ output[i] = NaN
+
+ return output
+
+#-------------------------------------------------------------------------------
+# Exponentially weighted moving average
+
+def ewma(ndarray[double_t] input, double_t com, int adjust, int ignore_na, int minp):
+ """
+ Compute exponentially-weighted moving average using center-of-mass.
+
+ Parameters
+ ----------
+ input : ndarray (float64 type)
+ com : float64
+ adjust: int
+ ignore_na: int
+ minp: int
+
+ Returns
+ -------
+ y : ndarray
+ """
+
+ cdef Py_ssize_t N = len(input)
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+ if N == 0:
+ return output
+
+ minp = max(minp, 1)
+
+ cdef double alpha, old_wt_factor, new_wt, weighted_avg, old_wt, cur
+ cdef Py_ssize_t i, nobs
+
+ alpha = 1. / (1. + com)
+ old_wt_factor = 1. - alpha
+ new_wt = 1. if adjust else alpha
+
+ weighted_avg = input[0]
+ is_observation = (weighted_avg == weighted_avg)
+ nobs = int(is_observation)
+ output[0] = weighted_avg if (nobs >= minp) else NaN
+ old_wt = 1.
+
+ for i from 1 <= i < N:
+ cur = input[i]
+ is_observation = (cur == cur)
+ nobs += int(is_observation)
+ if weighted_avg == weighted_avg:
+ if is_observation or (not ignore_na):
+ old_wt *= old_wt_factor
+ if is_observation:
+ if weighted_avg != cur: # avoid numerical errors on constant series
+ weighted_avg = ((old_wt * weighted_avg) + (new_wt * cur)) / (old_wt + new_wt)
+ if adjust:
+ old_wt += new_wt
+ else:
+ old_wt = 1.
+ elif is_observation:
+ weighted_avg = cur
+
+ output[i] = weighted_avg if (nobs >= minp) else NaN
+
+ return output
+
+#-------------------------------------------------------------------------------
+# Exponentially weighted moving covariance
+
+def ewmcov(ndarray[double_t] input_x, ndarray[double_t] input_y,
+ double_t com, int adjust, int ignore_na, int minp, int bias):
+ """
+    Compute exponentially-weighted moving covariance using center-of-mass.
+
+ Parameters
+ ----------
+ input_x : ndarray (float64 type)
+ input_y : ndarray (float64 type)
+ com : float64
+ adjust: int
+ ignore_na: int
+ minp: int
+ bias: int
+
+ Returns
+ -------
+ y : ndarray
+ """
+
+ cdef Py_ssize_t N = len(input_x)
+ if len(input_y) != N:
+ raise ValueError('arrays are of different lengths (%d and %d)' % (N, len(input_y)))
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+ if N == 0:
+ return output
+
+ minp = max(minp, 1)
+
+ cdef double alpha, old_wt_factor, new_wt, mean_x, mean_y, cov
+ cdef double sum_wt, sum_wt2, old_wt, cur_x, cur_y, old_mean_x, old_mean_y
+ cdef Py_ssize_t i, nobs
+
+ alpha = 1. / (1. + com)
+ old_wt_factor = 1. - alpha
+ new_wt = 1. if adjust else alpha
+
+ mean_x = input_x[0]
+ mean_y = input_y[0]
+ is_observation = ((mean_x == mean_x) and (mean_y == mean_y))
+ nobs = int(is_observation)
+ if not is_observation:
+ mean_x = NaN
+ mean_y = NaN
+ output[0] = (0. if bias else NaN) if (nobs >= minp) else NaN
+ cov = 0.
+ sum_wt = 1.
+ sum_wt2 = 1.
+ old_wt = 1.
+
+ for i from 1 <= i < N:
+ cur_x = input_x[i]
+ cur_y = input_y[i]
+ is_observation = ((cur_x == cur_x) and (cur_y == cur_y))
+ nobs += int(is_observation)
+ if mean_x == mean_x:
+ if is_observation or (not ignore_na):
+ sum_wt *= old_wt_factor
+ sum_wt2 *= (old_wt_factor * old_wt_factor)
+ old_wt *= old_wt_factor
+ if is_observation:
+ old_mean_x = mean_x
+ old_mean_y = mean_y
+ if mean_x != cur_x: # avoid numerical errors on constant series
+ mean_x = ((old_wt * old_mean_x) + (new_wt * cur_x)) / (old_wt + new_wt)
+ if mean_y != cur_y: # avoid numerical errors on constant series
+ mean_y = ((old_wt * old_mean_y) + (new_wt * cur_y)) / (old_wt + new_wt)
+ cov = ((old_wt * (cov + ((old_mean_x - mean_x) * (old_mean_y - mean_y)))) +
+ (new_wt * ((cur_x - mean_x) * (cur_y - mean_y)))) / (old_wt + new_wt)
+ sum_wt += new_wt
+ sum_wt2 += (new_wt * new_wt)
+ old_wt += new_wt
+ if not adjust:
+ sum_wt /= old_wt
+ sum_wt2 /= (old_wt * old_wt)
+ old_wt = 1.
+ elif is_observation:
+ mean_x = cur_x
+ mean_y = cur_y
+
+ if nobs >= minp:
+ if not bias:
+ numerator = sum_wt * sum_wt
+ denominator = numerator - sum_wt2
+ output[i] = ((numerator / denominator) * cov) if (denominator > 0.) else NaN
+ else:
+ output[i] = cov
+ else:
+ output[i] = NaN
+
+ return output
+
+#----------------------------------------------------------------------
+# Rolling variance
+
[email protected](False)
[email protected](False)
+def roll_var(ndarray[double_t] input, int win, int minp, int ddof=1):
+ """
+ Numerically stable implementation using Welford's method.
+ """
+ cdef double val, prev, mean_x = 0, ssqdm_x = 0, nobs = 0, delta
+ cdef Py_ssize_t i
+ cdef Py_ssize_t N = len(input)
+
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ minp = _check_minp(win, minp, N)
+
+ # Check for windows larger than array, addresses #7297
+ win = min(win, N)
+
+ with nogil:
+ # Over the first window, observations can only be added, never removed
+ for i from 0 <= i < win:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ delta = (val - mean_x)
+ mean_x += delta / nobs
+ ssqdm_x += delta * (val - mean_x)
+
+ if (nobs >= minp) and (nobs > ddof):
+ #pathological case
+ if nobs == 1:
+ val = 0
+ else:
+ val = ssqdm_x / (nobs - ddof)
+ if val < 0:
+ val = 0
+ else:
+ val = NaN
+
+ output[i] = val
+
+ # After the first window, observations can both be added and removed
+ for i from win <= i < N:
+ val = input[i]
+ prev = input[i - win]
+
+ if val == val:
+ if prev == prev:
+ # Adding one observation and removing another one
+ delta = val - prev
+ prev -= mean_x
+ mean_x += delta / nobs
+ val -= mean_x
+ ssqdm_x += (val + prev) * delta
+ else:
+ # Adding one observation and not removing any
+ nobs += 1
+ delta = (val - mean_x)
+ mean_x += delta / nobs
+ ssqdm_x += delta * (val - mean_x)
+ elif prev == prev:
+ # Adding no new observation, but removing one
+ nobs -= 1
+ if nobs:
+ delta = (prev - mean_x)
+ mean_x -= delta / nobs
+ ssqdm_x -= delta * (prev - mean_x)
+ else:
+ mean_x = 0
+ ssqdm_x = 0
+ # Variance is unchanged if no observation is added or removed
+
+ if (nobs >= minp) and (nobs > ddof):
+ #pathological case
+ if nobs == 1:
+ val = 0
+ else:
+ val = ssqdm_x / (nobs - ddof)
+ if val < 0:
+ val = 0
+ else:
+ val = NaN
+
+ output[i] = val
+
+ return output
+
+
+#-------------------------------------------------------------------------------
+# Rolling skewness
[email protected](False)
[email protected](False)
+def roll_skew(ndarray[double_t] input, int win, int minp):
+ cdef double val, prev
+ cdef double x = 0, xx = 0, xxx = 0
+ cdef Py_ssize_t nobs = 0, i
+ cdef Py_ssize_t N = len(input)
+
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ # 3 components of the skewness equation
+ cdef double A, B, C, R
+
+ minp = _check_minp(win, minp, N)
+ with nogil:
+ for i from 0 <= i < minp - 1:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ x += val
+ xx += val * val
+ xxx += val * val * val
+
+ output[i] = NaN
+
+ for i from minp - 1 <= i < N:
+ val = input[i]
+
+ if val == val:
+ nobs += 1
+ x += val
+ xx += val * val
+ xxx += val * val * val
+
+ if i > win - 1:
+ prev = input[i - win]
+ if prev == prev:
+ x -= prev
+ xx -= prev * prev
+ xxx -= prev * prev * prev
+
+ nobs -= 1
+ if nobs >= minp:
+ A = x / nobs
+ B = xx / nobs - A * A
+ C = xxx / nobs - A * A * A - 3 * A * B
+ if B <= 0 or nobs < 3:
+ output[i] = NaN
+ else:
+ R = sqrt(B)
+ output[i] = ((sqrt(nobs * (nobs - 1.)) * C) /
+ ((nobs-2) * R * R * R))
+ else:
+ output[i] = NaN
+
+ return output
+
+#-------------------------------------------------------------------------------
+# Rolling kurtosis
[email protected](False)
[email protected](False)
+def roll_kurt(ndarray[double_t] input,
+ int win, int minp):
+ cdef double val, prev
+ cdef double x = 0, xx = 0, xxx = 0, xxxx = 0
+ cdef Py_ssize_t nobs = 0, i
+ cdef Py_ssize_t N = len(input)
+
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ # 5 components of the kurtosis equation
+ cdef double A, B, C, D, R, K
+
+ minp = _check_minp(win, minp, N)
+ with nogil:
+ for i from 0 <= i < minp - 1:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+
+ # seriously don't ask me why this is faster
+ x += val
+ xx += val * val
+ xxx += val * val * val
+ xxxx += val * val * val * val
+
+ output[i] = NaN
+
+ for i from minp - 1 <= i < N:
+ val = input[i]
+
+ if val == val:
+ nobs += 1
+ x += val
+ xx += val * val
+ xxx += val * val * val
+ xxxx += val * val * val * val
+
+ if i > win - 1:
+ prev = input[i - win]
+ if prev == prev:
+ x -= prev
+ xx -= prev * prev
+ xxx -= prev * prev * prev
+ xxxx -= prev * prev * prev * prev
+
+ nobs -= 1
+
+ if nobs >= minp:
+ A = x / nobs
+ R = A * A
+ B = xx / nobs - R
+ R = R * A
+ C = xxx / nobs - R - 3 * A * B
+ R = R * A
+ D = xxxx / nobs - R - 6*B*A*A - 4*C*A
+
+ if B == 0 or nobs < 4:
+ output[i] = NaN
+
+ else:
+ K = (nobs * nobs - 1.)*D/(B*B) - 3*((nobs-1.)**2)
+ K = K / ((nobs - 2.)*(nobs-3.))
+
+ output[i] = K
+
+ else:
+ output[i] = NaN
+
+ return output
+
+#-------------------------------------------------------------------------------
+# Rolling median, min, max
+
+from skiplist cimport *
+
[email protected](False)
[email protected](False)
+def roll_median_c(ndarray[float64_t] arg, int win, int minp):
+ cdef:
+ double val, res, prev
+ bint err=0
+ int ret=0
+ skiplist_t *sl
+ Py_ssize_t midpoint, nobs = 0, i
+
+
+ cdef Py_ssize_t N = len(arg)
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ sl = skiplist_init(win)
+ if sl == NULL:
+ raise MemoryError("skiplist_init failed")
+
+ minp = _check_minp(win, minp, N)
+
+ with nogil:
+ for i from 0 <= i < minp - 1:
+ val = arg[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ err = skiplist_insert(sl, val) != 1
+ if err:
+ break
+ output[i] = NaN
+
+ with nogil:
+ if not err:
+ for i from minp - 1 <= i < N:
+
+ val = arg[i]
+
+ if i > win - 1:
+ prev = arg[i - win]
+
+ if prev == prev:
+ skiplist_remove(sl, prev)
+ nobs -= 1
+
+ if val == val:
+ nobs += 1
+ err = skiplist_insert(sl, val) != 1
+ if err:
+ break
+
+ if nobs >= minp:
+ midpoint = nobs / 2
+ if nobs % 2:
+ res = skiplist_get(sl, midpoint, &ret)
+ else:
+ res = (skiplist_get(sl, midpoint, &ret) +
+ skiplist_get(sl, (midpoint - 1), &ret)) / 2
+ else:
+ res = NaN
+
+ output[i] = res
+
+ skiplist_destroy(sl)
+ if err:
+ raise MemoryError("skiplist_insert failed")
+ return output
+
+#----------------------------------------------------------------------
+
+# Moving maximum / minimum code taken from Bottleneck under the terms
+# of its Simplified BSD license
+# https://github.com/kwgoodman/bottleneck
+
[email protected](False)
[email protected](False)
+def roll_max(ndarray[numeric] a, int window, int minp):
+ """
+ Moving max of 1d array of any numeric type along axis=0 ignoring NaNs.
+
+ Parameters
+ ----------
+ a: numpy array
+ window: int, size of rolling window
+ minp: if number of observations in window
+ is below this, output a NaN
+ """
+ return _roll_min_max(a, window, minp, 1)
+
[email protected](False)
[email protected](False)
+def roll_min(ndarray[numeric] a, int window, int minp):
+ """
+ Moving min of 1d array of any numeric type along axis=0 ignoring NaNs.
+
+ Parameters
+ ----------
+ a: numpy array
+ window: int, size of rolling window
+ minp: if number of observations in window
+ is below this, output a NaN
+ """
+ return _roll_min_max(a, window, minp, 0)
+
[email protected](False)
[email protected](False)
+cdef _roll_min_max(ndarray[numeric] a, int window, int minp, bint is_max):
+ "Moving min/max of 1d array of any numeric type along axis=0 ignoring NaNs."
+ cdef numeric ai, aold
+ cdef Py_ssize_t count
+ cdef Py_ssize_t* death
+ cdef numeric* ring
+ cdef numeric* minvalue
+ cdef numeric* end
+ cdef numeric* last
+ cdef Py_ssize_t i0
+ cdef np.npy_intp *dim
+ dim = PyArray_DIMS(a)
+ cdef Py_ssize_t n0 = dim[0]
+ cdef np.npy_intp *dims = [n0]
+ cdef bint should_replace
+ cdef np.ndarray[numeric, ndim=1] y = PyArray_EMPTY(1, dims, PyArray_TYPE(a), 0)
+
+ if window < 1:
+ raise ValueError('Invalid window size %d'
+ % (window))
+
+ if minp > window:
+ raise ValueError('Invalid min_periods size %d greater than window %d'
+ % (minp, window))
+
+ minp = _check_minp(window, minp, n0)
+ with nogil:
+ ring = <numeric*>malloc(window * sizeof(numeric))
+ death = <Py_ssize_t*>malloc(window * sizeof(Py_ssize_t))
+ end = ring + window
+ last = ring
+
+ minvalue = ring
+ ai = a[0]
+ if numeric in cython.floating:
+ if ai == ai:
+ minvalue[0] = ai
+ elif is_max:
+ minvalue[0] = MINfloat64
+ else:
+ minvalue[0] = MAXfloat64
+ else:
+ minvalue[0] = ai
+ death[0] = window
+
+ count = 0
+ for i0 in range(n0):
+ ai = a[i0]
+ if numeric in cython.floating:
+ if ai == ai:
+ count += 1
+ elif is_max:
+ ai = MINfloat64
+ else:
+ ai = MAXfloat64
+ else:
+ count += 1
+ if i0 >= window:
+ aold = a[i0 - window]
+ if aold == aold:
+ count -= 1
+ if death[minvalue-ring] == i0:
+ minvalue += 1
+ if minvalue >= end:
+ minvalue = ring
+ should_replace = ai >= minvalue[0] if is_max else ai <= minvalue[0]
+ if should_replace:
+ minvalue[0] = ai
+ death[minvalue-ring] = i0 + window
+ last = minvalue
+ else:
+ should_replace = last[0] <= ai if is_max else last[0] >= ai
+ while should_replace:
+ if last == ring:
+ last = end
+ last -= 1
+ should_replace = last[0] <= ai if is_max else last[0] >= ai
+ last += 1
+ if last == end:
+ last = ring
+ last[0] = ai
+ death[last - ring] = i0 + window
+ if numeric in cython.floating:
+ if count >= minp:
+ y[i0] = minvalue[0]
+ else:
+ y[i0] = NaN
+ else:
+ y[i0] = minvalue[0]
+
+ for i0 in range(minp - 1):
+ if numeric in cython.floating:
+ y[i0] = NaN
+ else:
+ y[i0] = 0
+
+ free(ring)
+ free(death)
+ return y
+
+def roll_quantile(ndarray[float64_t, cast=True] input, int win,
+ int minp, double quantile):
+ """
+ O(N log(window)) implementation using skip list
+ """
+ cdef double val, prev, midpoint
+ cdef IndexableSkiplist skiplist
+ cdef Py_ssize_t nobs = 0, i
+ cdef Py_ssize_t N = len(input)
+ cdef ndarray[double_t] output = np.empty(N, dtype=float)
+
+ skiplist = IndexableSkiplist(win)
+
+ minp = _check_minp(win, minp, N)
+
+ for i from 0 <= i < minp - 1:
+ val = input[i]
+
+ # Not NaN
+ if val == val:
+ nobs += 1
+ skiplist.insert(val)
+
+ output[i] = NaN
+
+ for i from minp - 1 <= i < N:
+ val = input[i]
+
+ if i > win - 1:
+ prev = input[i - win]
+
+ if prev == prev:
+ skiplist.remove(prev)
+ nobs -= 1
+
+ if val == val:
+ nobs += 1
+ skiplist.insert(val)
+
+ if nobs >= minp:
+ idx = int((quantile / 1.) * (nobs - 1))
+ output[i] = skiplist.get(idx)
+ else:
+ output[i] = NaN
+
+ return output
+
+def roll_generic(ndarray[float64_t, cast=True] input,
+ int win, int minp, int offset,
+ object func, object args, object kwargs):
+ cdef ndarray[double_t] output, counts, bufarr
+ cdef Py_ssize_t i, n
+ cdef float64_t *buf
+ cdef float64_t *oldbuf
+
+ if not input.flags.c_contiguous:
+ input = input.copy('C')
+
+ n = len(input)
+ if n == 0:
+ return input
+
+ minp = _check_minp(win, minp, n, floor=0)
+ output = np.empty(n, dtype=float)
+ counts = roll_sum(np.concatenate((np.isfinite(input).astype(float), np.array([0.] * offset))), win, minp)[offset:]
+
+ # truncated windows at the beginning, through first full-length window
+ for i from 0 <= i < (int_min(win, n) - offset):
+ if counts[i] >= minp:
+ output[i] = func(input[0 : (i + offset + 1)], *args, **kwargs)
+ else:
+ output[i] = NaN
+
+ # remaining full-length windows
+ buf = <float64_t*> input.data
+ bufarr = np.empty(win, dtype=float)
+ oldbuf = <float64_t*> bufarr.data
+ for i from (win - offset) <= i < (n - offset):
+ buf = buf + 1
+ bufarr.data = <char*> buf
+ if counts[i] >= minp:
+ output[i] = func(bufarr, *args, **kwargs)
+ else:
+ output[i] = NaN
+ bufarr.data = <char*> oldbuf
+
+ # truncated windows at the end
+ for i from int_max(n - offset, 0) <= i < n:
+ if counts[i] >= minp:
+ output[i] = func(input[int_max(i + offset - win + 1, 0) : n], *args, **kwargs)
+ else:
+ output[i] = NaN
+
+ return output
+
+
+def roll_window(ndarray[float64_t, ndim=1, cast=True] input,
+ ndarray[float64_t, ndim=1, cast=True] weights,
+ int minp, bint avg=True):
+ """
+ Assume len(weights) << len(input)
+ """
+ cdef:
+ ndarray[double_t] output, tot_wgt, counts
+ Py_ssize_t in_i, win_i, win_n, win_k, in_n, in_k
+ float64_t val_in, val_win, c, w
+
+ in_n = len(input)
+ win_n = len(weights)
+ output = np.zeros(in_n, dtype=float)
+ counts = np.zeros(in_n, dtype=float)
+ if avg:
+ tot_wgt = np.zeros(in_n, dtype=float)
+
+ minp = _check_minp(len(weights), minp, in_n)
+
+ if avg:
+ for win_i from 0 <= win_i < win_n:
+ val_win = weights[win_i]
+ if val_win != val_win:
+ continue
+
+ for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
+ val_in = input[in_i]
+ if val_in == val_in:
+ output[in_i + (win_n - win_i) - 1] += val_in * val_win
+ counts[in_i + (win_n - win_i) - 1] += 1
+ tot_wgt[in_i + (win_n - win_i) - 1] += val_win
+
+ for in_i from 0 <= in_i < in_n:
+ c = counts[in_i]
+ if c < minp:
+ output[in_i] = NaN
+ else:
+ w = tot_wgt[in_i]
+ if w == 0:
+ output[in_i] = NaN
+ else:
+ output[in_i] /= tot_wgt[in_i]
+
+ else:
+ for win_i from 0 <= win_i < win_n:
+ val_win = weights[win_i]
+ if val_win != val_win:
+ continue
+
+ for in_i from 0 <= in_i < in_n - (win_n - win_i) + 1:
+ val_in = input[in_i]
+
+ if val_in == val_in:
+ output[in_i + (win_n - win_i) - 1] += val_in * val_win
+ counts[in_i + (win_n - win_i) - 1] += 1
+
+ for in_i from 0 <= in_i < in_n:
+ c = counts[in_i]
+ if c < minp:
+ output[in_i] = NaN
+
+ return output
diff --git a/setup.py b/setup.py
index 596fe62ff0781..1d189364239a9 100755
--- a/setup.py
+++ b/setup.py
@@ -270,6 +270,7 @@ class CheckSDist(sdist_class):
'pandas/tslib.pyx',
'pandas/index.pyx',
'pandas/algos.pyx',
+ 'pandas/window.pyx',
'pandas/parser.pyx',
'pandas/src/period.pyx',
'pandas/src/sparse.pyx',
@@ -425,17 +426,23 @@ def pxd(name):
'sources': ['pandas/src/datetime/np_datetime.c',
'pandas/src/datetime/np_datetime_strings.c']},
algos={'pyxfile': 'algos',
- 'pxdfiles': ['src/skiplist'],
+ 'pxdfiles': ['src/util'],
'depends': [srcpath('generated', suffix='.pyx'),
- srcpath('join', suffix='.pyx'),
- 'pandas/src/skiplist.pyx',
- 'pandas/src/skiplist.h']},
+ srcpath('join', suffix='.pyx')]},
+ _window={'pyxfile': 'window',
+ 'pxdfiles': ['src/skiplist','src/util'],
+ 'depends': ['pandas/src/skiplist.pyx',
+ 'pandas/src/skiplist.h']},
parser={'pyxfile': 'parser',
'depends': ['pandas/src/parser/tokenizer.h',
'pandas/src/parser/io.h',
'pandas/src/numpy_helper.h'],
'sources': ['pandas/src/parser/tokenizer.c',
'pandas/src/parser/io.c']},
+ _sparse={'pyxfile': 'src/sparse',
+ 'depends': [srcpath('sparse', suffix='.pyx')]},
+ _testing={'pyxfile': 'src/testing',
+ 'depends': [srcpath('testing', suffix='.pyx')]},
)
ext_data["io.sas.saslib"] = {'pyxfile': 'io/sas/saslib'}
@@ -461,22 +468,6 @@ def pxd(name):
extensions.append(obj)
-sparse_ext = Extension('pandas._sparse',
- sources=[srcpath('sparse', suffix=suffix)],
- include_dirs=[],
- libraries=libraries,
- extra_compile_args=extra_compile_args)
-
-extensions.extend([sparse_ext])
-
-testing_ext = Extension('pandas._testing',
- sources=[srcpath('testing', suffix=suffix)],
- include_dirs=[],
- libraries=libraries,
- extra_compile_args=extra_compile_args)
-
-extensions.extend([testing_ext])
-
#----------------------------------------------------------------------
# msgpack
| makes it a little simpler to iterate on pieces of the cython code.
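For readers skimming the diff: the `roll_var` kernel being moved here uses Welford's streaming update (add one observation, drop one per step). A pure-Python sketch of the same update rule — not the Cython code, and simplified to assume no NaNs — looks like:

```python
def rolling_var(xs, win, ddof=1):
    # Welford-style rolling variance: keep a running mean and the sum of
    # squared deviations (ssqdm), adding/removing one value per step.
    out, mean, ssqdm = [], 0.0, 0.0
    for i, val in enumerate(xs):
        if i < win:
            # Growing window: plain Welford accumulation.
            nobs = i + 1
            delta = val - mean
            mean += delta / nobs
            ssqdm += delta * (val - mean)
        else:
            # Full window: add xs[i], remove xs[i - win] in one update.
            prev = xs[i - win]
            delta = val - prev
            prev -= mean                   # deviation of removed value from old mean
            mean += delta / win
            ssqdm += (val - mean + prev) * delta
        n = min(i + 1, win)
        out.append(ssqdm / (n - ddof) if n > ddof else None)
    return out
```

Running it on a tiny series, `rolling_var([1.0, 2.0, 3.0, 4.0], 3)` gives `[None, 0.5, 1.0, 1.0]`, matching the sample variances of each window.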
| https://api.github.com/repos/pandas-dev/pandas/pulls/13380 | 2016-06-06T13:15:42Z | 2016-06-06T14:33:23Z | 2016-06-06T14:33:23Z | 2016-06-06T14:33:23Z |
DOC: actually document float_precision in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index f559c3cb3ebaf..6aa2df3549914 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -269,6 +269,10 @@ thousands : str, default ``None``
Thousands separator.
decimal : str, default ``'.'``
Character to recognize as decimal point. E.g. use ``','`` for European data.
+float_precision : string, default None
+ Specifies which converter the C engine should use for floating-point values.
+ The options are ``None`` for the ordinary converter, ``high`` for the
+ high-precision converter, and ``round_trip`` for the round-trip converter.
lineterminator : str (length 1), default ``None``
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index a851a5f48f5e6..04b488aff5c0c 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -183,6 +183,11 @@
Thousands separator
decimal : str, default '.'
Character to recognize as decimal point (e.g. use ',' for European data).
+float_precision : string, default None
+ Specifies which converter the C engine should use for floating-point
+ values. The options are `None` for the ordinary converter,
+ `high` for the high-precision converter, and `round_trip` for the
+ round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1), optional
| So I wasn't 100% correct when I said that `float_precision` was documented <a href="https://github.com/pydata/pandas/issues/12686#issuecomment-222684918">here</a>. It was well documented internally for `TextParser` and in a section of `io.rst`, but it wasn't listed formally in the parameters for the `read_csv` documentation.
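As background on why the option matters: a float formatted with full precision always parses back to the same value, while a 15-significant-digit rendering may not — closing that gap is roughly what the `round_trip` converter is for. A pure-Python illustration of the round-trip property (independent of pandas, using only built-in float formatting):

```python
x = 0.1 + 0.2            # 0.30000000000000004, not exactly 0.3
full = repr(x)           # shortest string that parses back to x exactly
short = '%.15g' % x      # only 15 significant digits: '0.3'

assert float(full) == x  # round-trips exactly
assert float(short) != x # information was lost in the short rendering
```

The same distinction applies when a parser reads decimal text: a fast, lower-precision converter can land on a neighboring float, while a round-trip converter reproduces the value the writer formatted.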
| https://api.github.com/repos/pandas-dev/pandas/pulls/13377 | 2016-06-06T08:56:35Z | 2016-06-06T12:09:42Z | null | 2016-06-06T12:15:04Z |
DOC: Fix wording/grammar for rolling's win_type argument. | diff --git a/pandas/core/window.py b/pandas/core/window.py
index cd66d4e30c351..bf3fd69c6340b 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -280,7 +280,7 @@ class Window(_Window):
center : boolean, default False
Set the labels at the center of the window.
win_type : string, default None
- prove a window type, see the notes below
+ Provide a window type. See the notes below.
axis : int, default 0
Returns
| I don't know the exact intended phrasing here, but I think the writer might have meant "Provide"?
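For context, `win_type` selects a weight window (e.g. `'triang'`, or `'gaussian'` via scipy) that is applied as a weighted moving average. A minimal pure-Python sketch of that idea — not the pandas internals, with an explicit weight list standing in for the named window:

```python
def weighted_rolling_mean(xs, weights):
    # Positions before the first full window are undefined (None).
    n, w = len(xs), len(weights)
    out = [None] * n
    wsum = sum(weights)
    for i in range(w - 1, n):
        window = xs[i - w + 1:i + 1]
        out[i] = sum(x * wt for x, wt in zip(window, weights)) / wsum
    return out
```

With uniform weights this reduces to a plain rolling mean: `weighted_rolling_mean([1.0, 2.0, 3.0, 4.0], [1.0, 1.0])` gives `[None, 1.5, 2.5, 3.5]`.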
| https://api.github.com/repos/pandas-dev/pandas/pulls/13376 | 2016-06-06T02:35:12Z | 2016-06-06T12:10:41Z | null | 2016-06-06T12:10:51Z |
ENH: astype() can now take col label -> dtype mapping as arg; GH7271 | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 0b9695125c0a9..10009f2ff8e43 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -267,6 +267,7 @@ API changes
- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
- ``PeriodIndex`` can now accept ``list`` and ``array`` which contain ``pd.NaT`` (:issue:`13430`)
- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)
+- ``astype()`` will now accept a dict of column name to data types mapping as the ``dtype`` argument. (:issue:`12086`)
.. _whatsnew_0190.api.tolist:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d6e6f571be53a..0c19ccbc40a9f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1,4 +1,5 @@
# pylint: disable=W0231,E1101
+import collections
import warnings
import operator
import weakref
@@ -161,7 +162,7 @@ def _init_mgr(self, mgr, axes=None, dtype=None, copy=False):
@property
def _constructor(self):
- """Used when a manipulation result has the same dimesions as the
+ """Used when a manipulation result has the same dimensions as the
original.
"""
raise AbstractMethodError(self)
@@ -3001,7 +3002,11 @@ def astype(self, dtype, copy=True, raise_on_error=True, **kwargs):
Parameters
----------
- dtype : numpy.dtype or Python type
+ dtype : data type, or dict of column name -> data type
+ Use a numpy.dtype or Python type to cast entire pandas object to
+ the same type. Alternatively, use {col: dtype, ...}, where col is a
+ column label and dtype is a numpy.dtype or Python type to cast one
+ or more of the DataFrame's columns to column-specific types.
raise_on_error : raise on invalid input
kwargs : keyword arguments to pass on to the constructor
@@ -3009,10 +3014,36 @@ def astype(self, dtype, copy=True, raise_on_error=True, **kwargs):
-------
casted : type of caller
"""
+ if isinstance(dtype, collections.Mapping):
+ if self.ndim == 1: # i.e. Series
+ if len(dtype) > 1 or list(dtype.keys())[0] != self.name:
+ raise KeyError('Only the Series name can be used for '
+ 'the key in Series dtype mappings.')
+ new_type = list(dtype.values())[0]
+ return self.astype(new_type, copy, raise_on_error, **kwargs)
+ elif self.ndim > 2:
+ raise NotImplementedError(
+ 'astype() only accepts a dtype arg of type dict when '
+ 'invoked on Series and DataFrames. A single dtype must be '
+ 'specified when invoked on a Panel.'
+ )
+ for col_name in dtype.keys():
+ if col_name not in self:
+ raise KeyError('Only a column name can be used for the '
+ 'key in a dtype mappings argument.')
+ from pandas import concat
+ results = []
+ for col_name, col in self.iteritems():
+ if col_name in dtype:
+ results.append(col.astype(dtype[col_name], copy=copy))
+ else:
+ results.append(col.copy() if copy else col)
+ return concat(results, axis=1, copy=False)
- mgr = self._data.astype(dtype=dtype, copy=copy,
- raise_on_error=raise_on_error, **kwargs)
- return self._constructor(mgr).__finalize__(self)
+ # else, only a single dtype is given
+ new_data = self._data.astype(dtype=dtype, copy=copy,
+ raise_on_error=raise_on_error, **kwargs)
+ return self._constructor(new_data).__finalize__(self)
def copy(self, deep=True):
"""
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index c650436eefaf3..817770b9da610 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas import (DataFrame, Series, date_range, Timedelta, Timestamp,
- compat, option_context)
+ compat, concat, option_context)
from pandas.compat import u
from pandas.types.dtypes import DatetimeTZDtype
from pandas.tests.frame.common import TestData
@@ -396,6 +396,69 @@ def test_astype_str(self):
expected = DataFrame(['1.12345678901'])
assert_frame_equal(result, expected)
+ def test_astype_dict(self):
+ # GH7271
+ a = Series(date_range('2010-01-04', periods=5))
+ b = Series(range(5))
+ c = Series([0.0, 0.2, 0.4, 0.6, 0.8])
+ d = Series(['1.0', '2', '3.14', '4', '5.4'])
+ df = DataFrame({'a': a, 'b': b, 'c': c, 'd': d})
+ original = df.copy(deep=True)
+
+ # change type of a subset of columns
+ result = df.astype({'b': 'str', 'd': 'float32'})
+ expected = DataFrame({
+ 'a': a,
+ 'b': Series(['0', '1', '2', '3', '4']),
+ 'c': c,
+ 'd': Series([1.0, 2.0, 3.14, 4.0, 5.4], dtype='float32')})
+ assert_frame_equal(result, expected)
+ assert_frame_equal(df, original)
+
+ result = df.astype({'b': np.float32, 'c': 'float32', 'd': np.float64})
+ expected = DataFrame({
+ 'a': a,
+ 'b': Series([0.0, 1.0, 2.0, 3.0, 4.0], dtype='float32'),
+ 'c': Series([0.0, 0.2, 0.4, 0.6, 0.8], dtype='float32'),
+ 'd': Series([1.0, 2.0, 3.14, 4.0, 5.4], dtype='float64')})
+ assert_frame_equal(result, expected)
+ assert_frame_equal(df, original)
+
+ # change all columns
+ assert_frame_equal(df.astype({'a': str, 'b': str, 'c': str, 'd': str}),
+ df.astype(str))
+ assert_frame_equal(df, original)
+
+ # error should be raised when using something other than column labels
+ # in the keys of the dtype dict
+ self.assertRaises(KeyError, df.astype, {'b': str, 2: str})
+ self.assertRaises(KeyError, df.astype, {'e': str})
+ assert_frame_equal(df, original)
+
+ # if the dtypes provided are the same as the original dtypes, the
+ # resulting DataFrame should be the same as the original DataFrame
+ equiv = df.astype({col: df[col].dtype for col in df.columns})
+ assert_frame_equal(df, equiv)
+ assert_frame_equal(df, original)
+
+ def test_astype_duplicate_col(self):
+ a1 = Series([1, 2, 3, 4, 5], name='a')
+ b = Series([0.1, 0.2, 0.4, 0.6, 0.8], name='b')
+ a2 = Series([0, 1, 2, 3, 4], name='a')
+ df = concat([a1, b, a2], axis=1)
+
+ result = df.astype(str)
+ a1_str = Series(['1', '2', '3', '4', '5'], dtype='str', name='a')
+ b_str = Series(['0.1', '0.2', '0.4', '0.6', '0.8'], dtype=str,
+ name='b')
+ a2_str = Series(['0', '1', '2', '3', '4'], dtype='str', name='a')
+ expected = concat([a1_str, b_str, a2_str], axis=1)
+ assert_frame_equal(result, expected)
+
+ result = df.astype({'a': 'str'})
+ expected = concat([a1_str, b, a2_str], axis=1)
+ assert_frame_equal(result, expected)
+
def test_timedeltas(self):
df = DataFrame(dict(A=Series(date_range('2012-1-1', periods=3,
freq='D')),
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 6864eac603ded..9a406dfa10c35 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -133,6 +133,22 @@ def test_astype_unicode(self):
reload(sys) # noqa
sys.setdefaultencoding(former_encoding)
+ def test_astype_dict(self):
+ # GH7271
+ s = Series(range(0, 10, 2), name='abc')
+
+ result = s.astype({'abc': str})
+ expected = Series(['0', '2', '4', '6', '8'], name='abc')
+ assert_series_equal(result, expected)
+
+ result = s.astype({'abc': 'float64'})
+ expected = Series([0.0, 2.0, 4.0, 6.0, 8.0], dtype='float64',
+ name='abc')
+ assert_series_equal(result, expected)
+
+ self.assertRaises(KeyError, s.astype, {'abc': str, 'def': str})
+ self.assertRaises(KeyError, s.astype, {0: str})
+
def test_complexx(self):
# GH4819
# complex access for ndarray compat
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index f2e13867d3bf0..d9c7c1dc0dc62 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1231,6 +1231,18 @@ def test_dtypes(self):
expected = Series(np.dtype('float64'), index=self.panel.items)
assert_series_equal(result, expected)
+ def test_astype(self):
+ # GH7271
+ data = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
+ panel = Panel(data, ['a', 'b'], ['c', 'd'], ['e', 'f'])
+
+ str_data = np.array([[['1', '2'], ['3', '4']],
+ [['5', '6'], ['7', '8']]])
+ expected = Panel(str_data, ['a', 'b'], ['c', 'd'], ['e', 'f'])
+ assert_panel_equal(panel.astype(str), expected)
+
+ self.assertRaises(NotImplementedError, panel.astype, {0: str})
+
def test_apply(self):
# GH1148
| New PR for what was started in #12086.
closes #7271
By passing a dict of {column name/column index: dtype}, multiple columns can be cast to different data types in a single command. Now users can do:
`df = df.astype({'my_bool': 'bool', 'my_int': 'int'})`
or:
`df = df.astype({0: 'bool', 1: 'int'})`
instead of:
```
df['my_bool'] = df.my_bool.astype('bool')
df['my_int'] = df.my_int.astype('int')
```
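The column-wise dispatch in the patch — cast the columns named in the mapping, pass the rest through, and reject unknown keys — can be sketched in plain Python over a dict-of-lists "frame" (a hypothetical stand-in, not the pandas implementation):

```python
def astype_by_column(table, dtypes):
    # table: column name -> list of values; dtypes: column name -> callable.
    # Unknown keys raise, mirroring the patch's KeyError check.
    for col in dtypes:
        if col not in table:
            raise KeyError('Only a column name can be used for the '
                           'key in a dtype mappings argument.')
    return {name: [dtypes[name](v) for v in vals] if name in dtypes
            else list(vals)
            for name, vals in table.items()}
```

For example, `astype_by_column({'a': [1, 2], 'b': ['3', '4']}, {'b': int})` casts only column `'b'` and leaves `'a'` untouched.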
| https://api.github.com/repos/pandas-dev/pandas/pulls/13375 | 2016-06-06T00:05:25Z | 2016-07-20T22:04:50Z | null | 2016-07-20T22:04:54Z |
DEPR: Deprecate as_recarray in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 6aa2df3549914..6802a448c4e14 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -134,6 +134,14 @@ usecols : array-like, default ``None``
inferred from the document header row(s). For example, a valid `usecols`
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter
results in much faster parsing time and lower memory usage.
+as_recarray : boolean, default ``False``
+ DEPRECATED: this argument will be removed in a future version. Please call
+ ``pd.read_csv(...).to_records()`` instead.
+
+ Return a NumPy recarray instead of a DataFrame after parsing the data. If
+ set to ``True``, this option takes precedence over the ``squeeze`` parameter.
+ In addition, as row indices are not available in such a format, the ``index_col``
+ parameter will be ignored.
squeeze : boolean, default ``False``
If the parsed data only contains one column then return a Series.
prefix : str, default ``None``
@@ -179,9 +187,6 @@ low_memory : boolean, default ``True``
buffer_lines : int, default None
DEPRECATED: this argument will be removed in a future version because its
value is not respected by the parser
-
- If ``low_memory`` is ``True``, specify the number of rows to be read for
- each chunk. (Only valid with C parser)
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 93aedce07da9d..1e95af2df247b 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -295,6 +295,7 @@ Deprecations
- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13320`)
- ``buffer_lines`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13360`)
+- ``as_recarray`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13373`)
.. _whatsnew_0182.performance:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 04b488aff5c0c..0f0e1848750c0 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2,7 +2,8 @@
Module contains tools for processing files into DataFrames or other objects
"""
from __future__ import print_function
-from pandas.compat import range, lrange, StringIO, lzip, zip, string_types, map
+from pandas.compat import (range, lrange, StringIO, lzip, zip,
+ string_types, map, OrderedDict)
from pandas import compat
from collections import defaultdict
import re
@@ -87,6 +88,14 @@
inferred from the document header row(s). For example, a valid `usecols`
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter
results in much faster parsing time and lower memory usage.
+as_recarray : boolean, default False
+ DEPRECATED: this argument will be removed in a future version. Please call
+ `pd.read_csv(...).to_records()` instead.
+
+ Return a NumPy recarray instead of a DataFrame after parsing the data.
+ If set to True, this option takes precedence over the `squeeze` parameter.
+ In addition, as row indices are not available in such a format, the
+ `index_col` parameter will be ignored.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
prefix : str, default None
@@ -239,9 +248,6 @@
buffer_lines : int, default None
DEPRECATED: this argument will be removed in a future version because its
value is not respected by the parser
-
- If low_memory is True, specify the number of rows to be read for each
- chunk. (Only valid with C parser)
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
@@ -452,7 +458,6 @@ def _read(filepath_or_buffer, kwds):
_c_unsupported = set(['skip_footer'])
_python_unsupported = set([
- 'as_recarray',
'low_memory',
'memory_map',
'buffer_lines',
@@ -462,6 +467,7 @@ def _read(filepath_or_buffer, kwds):
'float_precision',
])
_deprecated_args = set([
+ 'as_recarray',
'buffer_lines',
'compact_ints',
'use_unsigned',
@@ -820,12 +826,22 @@ def _clean_options(self, options, engine):
_validate_header_arg(options['header'])
+ depr_warning = ''
+
for arg in _deprecated_args:
parser_default = _c_parser_defaults[arg]
+ msg = ("The '{arg}' argument has been deprecated "
+ "and will be removed in a future version."
+ .format(arg=arg))
+
+ if arg == 'as_recarray':
+ msg += ' Please call pd.read_csv(...).to_records() instead.'
+
if result.get(arg, parser_default) != parser_default:
- warnings.warn("The '{arg}' argument has been deprecated "
- "and will be removed in a future version"
- .format(arg=arg), FutureWarning, stacklevel=2)
+ depr_warning += msg + '\n\n'
+
+ if depr_warning != '':
+ warnings.warn(depr_warning, FutureWarning, stacklevel=2)
if index_col is True:
raise ValueError("The value of index_col couldn't be 'True'")
@@ -973,6 +989,7 @@ def __init__(self, kwds):
self.na_fvalues = kwds.get('na_fvalues')
self.true_values = kwds.get('true_values')
self.false_values = kwds.get('false_values')
+ self.as_recarray = kwds.get('as_recarray', False)
self.tupleize_cols = kwds.get('tupleize_cols', False)
self.mangle_dupe_cols = kwds.get('mangle_dupe_cols', True)
self.infer_datetime_format = kwds.pop('infer_datetime_format', False)
@@ -1304,7 +1321,6 @@ def __init__(self, src, **kwds):
self.kwds = kwds
kwds = kwds.copy()
- self.as_recarray = kwds.get('as_recarray', False)
ParserBase.__init__(self, kwds)
if 'utf-16' in (kwds.get('encoding') or ''):
@@ -1889,6 +1905,9 @@ def read(self, rows=None):
columns, data = self._do_date_conversions(columns, data)
data = self._convert_data(data)
+ if self.as_recarray:
+ return self._to_recarray(data, columns)
+
index, columns = self._make_index(data, alldata, columns, indexnamerow)
return index, columns, data
@@ -1928,6 +1947,19 @@ def _convert_data(self, data):
return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues,
self.verbose, clean_conv)
+ def _to_recarray(self, data, columns):
+ dtypes = []
+ o = OrderedDict()
+
+ # use the columns to "order" the keys
+ # in the unordered 'data' dictionary
+ for col in columns:
+ dtypes.append((str(col), data[col].dtype))
+ o[col] = data[col]
+
+ tuples = lzip(*o.values())
+ return np.array(tuples, dtypes)
+
def _infer_columns(self):
names = self.names
num_original_columns = 0
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index b7ef754004e18..90103064774c1 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -172,30 +172,6 @@ def error(val):
self.assertTrue(sum(precise_errors) <= sum(normal_errors))
self.assertTrue(max(precise_errors) <= max(normal_errors))
- def test_compact_ints_as_recarray(self):
- if compat.is_platform_windows():
- raise nose.SkipTest(
- "segfaults on win-64, only when all tests are run")
-
- data = ('0,1,0,0\n'
- '1,1,0,0\n'
- '0,1,0,1')
-
- with tm.assert_produces_warning(
- FutureWarning, check_stacklevel=False):
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- compact_ints=True, as_recarray=True)
- ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
-
- with tm.assert_produces_warning(
- FutureWarning, check_stacklevel=False):
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- as_recarray=True, compact_ints=True,
- use_unsigned=True)
- ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
-
def test_pass_dtype(self):
data = """\
one,two
@@ -220,10 +196,12 @@ def test_pass_dtype_as_recarray(self):
3,4.5
4,5.5"""
- result = self.read_csv(StringIO(data), dtype={'one': 'u1', 1: 'S1'},
- as_recarray=True)
- self.assertEqual(result['one'].dtype, 'u1')
- self.assertEqual(result['two'].dtype, 'S1')
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = self.read_csv(StringIO(data), dtype={
+ 'one': 'u1', 1: 'S1'}, as_recarray=True)
+ self.assertEqual(result['one'].dtype, 'u1')
+ self.assertEqual(result['two'].dtype, 'S1')
def test_empty_pass_dtype(self):
data = 'one,two'
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index f8c7241fdf88a..fdaac71f59386 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -608,10 +608,6 @@ def test_url(self):
@tm.slow
def test_file(self):
-
- # FILE
- if sys.version_info[:2] < (2, 6):
- raise nose.SkipTest("file:// not supported with Python < 2.6")
dirpath = tm.get_data_path()
localtable = os.path.join(dirpath, 'salary.table.csv')
local_table = self.read_table(localtable)
@@ -925,8 +921,8 @@ def test_empty_with_nrows_chunksize(self):
StringIO('foo,bar\n'), chunksize=10)))
tm.assert_frame_equal(result, expected)
- # 'as_recarray' is not supported yet for the Python parser
- if self.engine == 'c':
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
result = self.read_csv(StringIO('foo,bar\n'),
nrows=10, as_recarray=True)
result = DataFrame(result[2], columns=result[1],
@@ -934,11 +930,13 @@ def test_empty_with_nrows_chunksize(self):
tm.assert_frame_equal(DataFrame.from_records(
result), expected, check_index_type=False)
- result = next(iter(self.read_csv(
- StringIO('foo,bar\n'), chunksize=10, as_recarray=True)))
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = next(iter(self.read_csv(StringIO('foo,bar\n'),
+ chunksize=10, as_recarray=True)))
result = DataFrame(result[2], columns=result[1], index=result[0])
- tm.assert_frame_equal(DataFrame.from_records(
- result), expected, check_index_type=False)
+ tm.assert_frame_equal(DataFrame.from_records(result), expected,
+ check_index_type=False)
def test_eof_states(self):
# see gh-10728, gh-10548
@@ -1373,3 +1371,90 @@ def test_compact_ints_use_unsigned(self):
out = self.read_csv(StringIO(data), compact_ints=True,
use_unsigned=True)
tm.assert_frame_equal(out, expected)
+
+ def test_compact_ints_as_recarray(self):
+ data = ('0,1,0,0\n'
+ '1,1,0,0\n'
+ '0,1,0,1')
+
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = self.read_csv(StringIO(data), delimiter=',', header=None,
+ compact_ints=True, as_recarray=True)
+ ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
+ self.assertEqual(result.dtype, ex_dtype)
+
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = self.read_csv(StringIO(data), delimiter=',', header=None,
+ as_recarray=True, compact_ints=True,
+ use_unsigned=True)
+ ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
+ self.assertEqual(result.dtype, ex_dtype)
+
+ def test_as_recarray(self):
+ # basic test
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'a,b\n1,a\n2,b'
+ expected = np.array([(1, 'a'), (2, 'b')],
+ dtype=[('a', '<i8'), ('b', 'O')])
+ out = self.read_csv(StringIO(data), as_recarray=True)
+ tm.assert_numpy_array_equal(out, expected)
+
+ # index_col ignored
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'a,b\n1,a\n2,b'
+ expected = np.array([(1, 'a'), (2, 'b')],
+ dtype=[('a', '<i8'), ('b', 'O')])
+ out = self.read_csv(StringIO(data), as_recarray=True, index_col=0)
+ tm.assert_numpy_array_equal(out, expected)
+
+ # respects names
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = '1,a\n2,b'
+ expected = np.array([(1, 'a'), (2, 'b')],
+ dtype=[('a', '<i8'), ('b', 'O')])
+ out = self.read_csv(StringIO(data), names=['a', 'b'],
+ header=None, as_recarray=True)
+ tm.assert_numpy_array_equal(out, expected)
+
+ # header order is respected even though it conflicts
+ # with the natural ordering of the column names
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'b,a\n1,a\n2,b'
+ expected = np.array([(1, 'a'), (2, 'b')],
+ dtype=[('b', '<i8'), ('a', 'O')])
+ out = self.read_csv(StringIO(data), as_recarray=True)
+ tm.assert_numpy_array_equal(out, expected)
+
+ # overrides the squeeze parameter
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'a\n1'
+ expected = np.array([(1,)], dtype=[('a', '<i8')])
+ out = self.read_csv(StringIO(data), as_recarray=True, squeeze=True)
+ tm.assert_numpy_array_equal(out, expected)
+
+ # does data conversions before doing recarray conversion
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'a,b\n1,a\n2,b'
+ conv = lambda x: int(x) + 1
+ expected = np.array([(2, 'a'), (3, 'b')],
+ dtype=[('a', '<i8'), ('b', 'O')])
+ out = self.read_csv(StringIO(data), as_recarray=True,
+ converters={'a': conv})
+ tm.assert_numpy_array_equal(out, expected)
+
+ # filters by usecols before doing recarray conversion
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ data = 'a,b\n1,a\n2,b'
+ expected = np.array([(1,), (2,)], dtype=[('a', '<i8')])
+ out = self.read_csv(StringIO(data), as_recarray=True,
+ usecols=['a'])
+ tm.assert_numpy_array_equal(out, expected)
diff --git a/pandas/io/tests/parser/header.py b/pandas/io/tests/parser/header.py
index ca148b373d659..33a4d71fc03b6 100644
--- a/pandas/io/tests/parser/header.py
+++ b/pandas/io/tests/parser/header.py
@@ -115,10 +115,12 @@ def test_header_multi_index(self):
# INVALID OPTIONS
# no as_recarray
- self.assertRaises(ValueError, self.read_csv,
- StringIO(data), header=[0, 1, 2, 3],
- index_col=[0, 1], as_recarray=True,
- tupleize_cols=False)
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ self.assertRaises(ValueError, self.read_csv,
+ StringIO(data), header=[0, 1, 2, 3],
+ index_col=[0, 1], as_recarray=True,
+ tupleize_cols=False)
# names
self.assertRaises(ValueError, self.read_csv,
diff --git a/pandas/io/tests/parser/test_textreader.py b/pandas/io/tests/parser/test_textreader.py
index c35cfca7012d3..fd2f49cef656a 100644
--- a/pandas/io/tests/parser/test_textreader.py
+++ b/pandas/io/tests/parser/test_textreader.py
@@ -200,11 +200,6 @@ def test_header_not_enough_lines(self):
delimiter=',', header=5, as_recarray=True)
def test_header_not_enough_lines_as_recarray(self):
-
- if compat.is_platform_windows():
- raise nose.SkipTest(
- "segfaults on win-64, only when all tests are run")
-
data = ('skip this\n'
'skip this\n'
'a,b,c\n'
@@ -279,10 +274,6 @@ def test_numpy_string_dtype_as_recarray(self):
aaaa,4
aaaaa,5"""
- if compat.is_platform_windows():
- raise nose.SkipTest(
- "segfaults on win-64, only when all tests are run")
-
def _make_reader(**kwds):
return TextReader(StringIO(data), delimiter=',', header=None,
**kwds)
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index 97862ffa90cef..c8ad46af10795 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -124,6 +124,7 @@ def test_deprecated_args(self):
# deprecated arguments with non-default values
deprecated = {
+ 'as_recarray': True,
'buffer_lines': True,
'compact_ints': True,
'use_unsigned': True,
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index d7ddaee658fe7..063b2158d999a 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -809,7 +809,7 @@ cdef class TextReader:
if self.as_recarray:
self._start_clock()
- result = _to_structured_array(columns, self.header)
+ result = _to_structured_array(columns, self.header, self.usecols)
self._end_clock('Conversion to structured array')
return result
@@ -1965,7 +1965,7 @@ cdef _apply_converter(object f, parser_t *parser, int col,
return lib.maybe_convert_objects(result)
-def _to_structured_array(dict columns, object names):
+def _to_structured_array(dict columns, object names, object usecols):
cdef:
ndarray recs, column
cnp.dtype dt
@@ -1982,6 +1982,10 @@ def _to_structured_array(dict columns, object names):
# single line header
names = names[0]
+ if usecols is not None:
+ names = [n for i, n in enumerate(names)
+ if i in usecols or n in usecols]
+
dt = np.dtype([(str(name), columns[i].dtype)
for i, name in enumerate(names)])
fnames = dt.names
| 1) Documented and deprecated `as_recarray`
2) Added `as_recarray` functionality to Python engine
3) Fixed bug in C engine in which `usecols` was not being respected in combination with `as_recarray`
| https://api.github.com/repos/pandas-dev/pandas/pulls/13373 | 2016-06-05T22:02:12Z | 2016-06-06T23:14:50Z | null | 2016-06-06T23:16:54Z |
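The deprecation message in this PR points users at `DataFrame.to_records()` as the replacement for `as_recarray=True`. A minimal sketch of that migration (the sample CSV and column names are invented for illustration):

```python
from io import StringIO

import pandas as pd

# Deprecated path: pd.read_csv(..., as_recarray=True)
# Suggested replacement: parse to a DataFrame, then convert.
csv = StringIO("a,b\n1,x\n2,y")

# index=False mirrors as_recarray ignoring index_col: row indices
# are not carried into the resulting record array.
rec = pd.read_csv(csv).to_records(index=False)

print(rec.dtype.names)  # ('a', 'b')
```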
CLN: remove old skiplist code | diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index a31b35ba4afc6..7884d9c41845c 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -1505,52 +1505,8 @@ def roll_kurt(ndarray[double_t] input,
#-------------------------------------------------------------------------------
# Rolling median, min, max
-ctypedef double_t (* skiplist_f)(object sl, int n, int p)
-
-cdef _roll_skiplist_op(ndarray arg, int win, int minp, skiplist_f op):
- cdef ndarray[double_t] input = arg
- cdef double val, prev, midpoint
- cdef IndexableSkiplist skiplist
- cdef Py_ssize_t nobs = 0, i
-
- cdef Py_ssize_t N = len(input)
- cdef ndarray[double_t] output = np.empty(N, dtype=float)
-
- skiplist = IndexableSkiplist(win)
-
- minp = _check_minp(win, minp, N)
-
- for i from 0 <= i < minp - 1:
- val = input[i]
-
- # Not NaN
- if val == val:
- nobs += 1
- skiplist.insert(val)
-
- output[i] = NaN
-
- for i from minp - 1 <= i < N:
- val = input[i]
-
- if i > win - 1:
- prev = input[i - win]
-
- if prev == prev:
- skiplist.remove(prev)
- nobs -= 1
-
- if val == val:
- nobs += 1
- skiplist.insert(val)
-
- output[i] = op(skiplist, nobs, minp)
-
- return output
-
from skiplist cimport *
-
@cython.boundscheck(False)
@cython.wraparound(False)
def roll_median_c(ndarray[float64_t] arg, int win, int minp):
| https://api.github.com/repos/pandas-dev/pandas/pulls/13372 | 2016-06-05T21:52:40Z | 2016-06-05T21:56:53Z | null | 2016-06-05T21:56:53Z |
|
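The deleted `_roll_skiplist_op` was the generic driver for skiplist-backed rolling ops; the `from skiplist cimport *` path at the bottom of the hunk supersedes it. As a rough illustration of the pattern it implemented, here is a pure-Python sketch using `bisect` in place of the indexable skiplist (so inserts cost O(window) instead of O(log window); the NaN/`minp` handling of the Cython version is omitted):

```python
import bisect

def rolling_median(values, win):
    """Rolling median over a fixed window, keeping the window sorted.

    Same shape as the removed _roll_skiplist_op: insert the incoming
    value, evict the value leaving the window, then read the middle
    of the sorted window.
    """
    window, out = [], []
    for i, val in enumerate(values):
        if i >= win:
            # evict the observation that falls out of the window
            window.pop(bisect.bisect_left(window, values[i - win]))
        bisect.insort(window, val)
        mid, odd = divmod(len(window), 2)
        out.append(window[mid] if odd else
                   (window[mid - 1] + window[mid]) / 2.0)
    return out

print(rolling_median([1, 2, 3, 4], win=2))  # [1, 1.5, 2.5, 3.5]
```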
Typo correction | diff --git a/ci/cron/go_doc.sh b/ci/cron/go_doc.sh
deleted file mode 100755
index 89659577d0e7f..0000000000000
--- a/ci/cron/go_doc.sh
+++ /dev/null
@@ -1,99 +0,0 @@
-#!/bin/bash
-
-# This is a one-command cron job for setting up
-# a virtualenv-based, linux-based, py2-based environment
-# for building the Pandas documentation.
-#
-# The first run will install all required deps from pypi
-# into the venv including monsters like scipy.
-# You may want to set it up yourself to speed up the
-# process.
-#
-# This is meant to be run as a cron job under a dedicated
-# user account whose HOME directory contains this script.
-# a CI directory will be created under it and all files
-# stored within it.
-#
-# The hardcoded dep versions will gradually become obsolete
-# You may need to tweak them
-#
-# @y-p, Jan/2014
-
-# disto latex is sometimes finicky. Optionall use
-# a local texlive install
-export PATH=/mnt/debian/texlive/2013/bin/x86_64-linux:$PATH
-
-# Having ccache will speed things up
-export PATH=/usr/lib64/ccache/:$PATH
-
-# limit disk usage
-ccache -M 200M
-
-BASEDIR="$HOME/CI"
-REPO_URL="https://github.com/pydata/pandas"
-REPO_LOC="$BASEDIR/pandas"
-
-if [ ! -d $BASEDIR ]; then
- mkdir -p $BASEDIR
- virtualenv $BASEDIR/venv
-fi
-
-source $BASEDIR/venv/bin/activate
-
-pip install numpy==1.7.2
-pip install cython==0.20.0
-pip install python-dateutil==2.2
-pip install --pre pytz==2013.9
-pip install sphinx==1.1.3
-pip install numexpr==2.2.2
-
-pip install matplotlib==1.3.0
-pip install lxml==3.2.5
-pip install beautifulsoup4==4.3.2
-pip install html5lib==0.99
-
-# You'll need R as well
-pip install rpy2==2.3.9
-
-pip install tables==3.0.0
-pip install bottleneck==0.7.0
-pip install ipython==0.13.2
-
-# only if you have too
-pip install scipy==0.13.2
-
-pip install openpyxl==1.6.2
-pip install xlrd==0.9.2
-pip install xlwt==0.7.5
-pip install xlsxwriter==0.5.1
-pip install sqlalchemy==0.8.3
-
-if [ ! -d "$REPO_LOC" ]; then
- git clone "$REPO_URL" "$REPO_LOC"
-fi
-
-cd "$REPO_LOC"
-git reset --hard
-git clean -df
-git checkout master
-git pull origin
-make
-
-source $BASEDIR/venv/bin/activate
-export PATH="/usr/lib64/ccache/:$PATH"
-pip uninstall pandas -yq
-pip install "$REPO_LOC"
-
-cd "$REPO_LOC"/doc
-
-python make.py clean
-python make.py html
-if [ ! $? == 0 ]; then
- exit 1
-fi
-python make.py zip_html
-# usually requires manual intervention
-# python make.py latex
-
-# If you have access:
-# python make.py upload_dev
| Corrected a small typo found when reviewing the script.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13369 | 2016-06-05T14:23:02Z | 2016-06-05T18:06:10Z | 2016-06-05T18:06:10Z | 2016-06-05T18:06:10Z |
DOC: document doublequote in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 4eb42e1fb918d..79867d33c5838 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -273,6 +273,10 @@ quoting : int or ``csv.QUOTE_*`` instance, default ``None``
``QUOTE_MINIMAL`` (0), ``QUOTE_ALL`` (1), ``QUOTE_NONNUMERIC`` (2) or
``QUOTE_NONE`` (3). Default (``None``) results in ``QUOTE_MINIMAL``
behavior.
+doublequote : boolean, default ``True``
+ When ``quotechar`` is specified and ``quoting`` is not ``QUOTE_NONE``,
+ indicate whether or not to interpret two consecutive ``quotechar`` elements
+ **inside** a field as a single ``quotechar`` element.
escapechar : str (length 1), default ``None``
One-character string used to escape delimiter when quoting is ``QUOTE_NONE``.
comment : str, default ``None``
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 2c8726f588522..150e5ba5e1521 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -192,6 +192,10 @@
Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
Default (None) results in QUOTE_MINIMAL behavior.
+doublequote : boolean, default ``True``
+ When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
+ whether or not to interpret two consecutive quotechar elements INSIDE a
+ field as a single ``quotechar`` element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
| Title is self-explanatory.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13368 | 2016-06-05T07:48:22Z | 2016-06-05T13:55:07Z | null | 2016-06-05T20:06:27Z |
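A short illustration of the default documented above (the sample data is invented for the example):

```python
from io import StringIO

import pandas as pd

# Inside a quoted field, a doubled quotechar collapses to a single
# quote character when doublequote=True (the default).
data = 'a,b\n"one ""two""",3\n'

df = pd.read_csv(StringIO(data))
print(df.loc[0, 'a'])  # one "two"
```

With `doublequote=False`, the embedded `""` would no longer be interpreted as an escaped quote character.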
DOC: Fixed a minor typo | diff --git a/doc/README.rst b/doc/README.rst
index 06d95e6b9c44d..a93ad32a4c8f8 100644
--- a/doc/README.rst
+++ b/doc/README.rst
@@ -160,7 +160,7 @@ and `Good as first PR
<https://github.com/pydata/pandas/issues?labels=Good+as+first+PR&sort=updated&state=open>`_
where you could start out.
-Or maybe you have an idea of you own, by using pandas, looking for something
+Or maybe you have an idea of your own, by using pandas, looking for something
in the documentation and thinking 'this can be improved', let's do something
about that!
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index e64ff4c155132..a9b86925666b7 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -21,7 +21,7 @@ and `Difficulty Novice
<https://github.com/pydata/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22>`_
where you could start out.
-Or maybe through using *pandas* you have an idea of you own or are looking for something
+Or maybe through using *pandas* you have an idea of your own or are looking for something
in the documentation and thinking 'this can be improved'...you can do something
about it!
| It also seems that the texts of the `doc/README.rst` and the `doc/source/contributing.rst` have lots of overlap and duplication. Not sure if there are any plans in consolidating these two files but I thought to mention that here 😄
| https://api.github.com/repos/pandas-dev/pandas/pulls/13366 | 2016-06-04T23:19:03Z | 2016-06-05T13:51:02Z | null | 2016-06-07T02:02:56Z |
BUG: resample by BusinessHour raises ValueError | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 7493150370e9f..1a701850d205b 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -341,6 +341,7 @@ Bug Fixes
- Bug in ``.resample(..)`` with a ``PeriodIndex`` not retaining its type or name with an empty ``DataFrame`` appropriately when empty (:issue:`13212`)
- Bug in ``groupby(..).resample(..)`` where passing some keywords would raise an exception (:issue:`13235`)
- Bug in ``.tz_convert`` on a tz-aware ``DateTimeIndex`` that relied on index being sorted for correct results (:issue:`13306`)
+- Bug in ``.resample(..)`` with a ``BusinessHour`` raises ``ValueError`` (:issue:`12351`)
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index 8d6955ab43711..451c589cddcd9 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -12,7 +12,8 @@
from pandas.tseries.frequencies import to_offset, is_subperiod, is_superperiod
from pandas.tseries.index import DatetimeIndex, date_range
from pandas.tseries.tdi import TimedeltaIndex
-from pandas.tseries.offsets import DateOffset, Tick, Day, _delta_to_nanoseconds
+from pandas.tseries.offsets import (DateOffset, Tick, Day, BusinessHour,
+ _delta_to_nanoseconds)
from pandas.tseries.period import PeriodIndex, period_range
import pandas.core.common as com
import pandas.core.algorithms as algos
@@ -1213,8 +1214,13 @@ def _get_range_edges(first, last, offset, closed='left', base=0):
if (is_day and day_nanos % offset.nanos == 0) or not is_day:
return _adjust_dates_anchored(first, last, offset,
closed=closed, base=base)
+ elif isinstance(offset, BusinessHour):
+ # GH12351 - normalize BH freq leads ValueError
+ first = Timestamp(offset.rollback(first))
+ last = Timestamp(offset.rollforward(last + offset))
+ return first, last
- if not isinstance(offset, Tick): # and first.time() != last.time():
+ else: # and first.time() != last.time():
# hack!
first = first.normalize()
last = last.normalize()
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 2236d20975eee..c089938e05e94 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -2297,6 +2297,17 @@ def test_upsample_daily_business_daily(self):
expected = ts.asfreq('H', how='s').reindex(exp_rng)
assert_series_equal(result, expected)
+ def test_resample_hourly_business_hourly(self):
+ ts = pd.Series(index=pd.date_range(start='2016-06-01 03:00:00',
+ end='2016-06-03 23:00:00',
+ freq='H'))
+ expected = pd.Series(index=pd.date_range(start='2016-05-31 17:00:00',
+ end='2016-06-06 09:00:00',
+ freq='BH'))
+
+ result = ts.resample('BH').mean()
+ assert_series_equal(result, expected)
+
def test_resample_irregular_sparse(self):
dr = date_range(start='1/1/2012', freq='5min', periods=1000)
s = Series(np.array(100), index=dr)
| - [x] closes #12351
- [x] tests added / passed (`TestPeriodIndex.test_resample_hourly_business_hourly`)
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
I did this during the pandas sprint at PyCon 2016. Hope this closes #12351
Resampling with BusinessHour took much more consideration than I guessed.
Please review this. Any comments are welcome.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13364 | 2016-06-04T22:48:03Z | 2017-02-01T20:53:15Z | null | 2017-02-01T20:53:15Z |
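The patch above replaces the `normalize()` hack (midnight is not a valid `BusinessHour` anchor, hence the `ValueError`) with `rollback`/`rollforward`, which snap the range edges onto the offset. A small sketch of that snapping behavior, reusing the timestamps from the test:

```python
import pandas as pd
from pandas.tseries.offsets import BusinessHour

bh = BusinessHour()  # 09:00-17:00 on business days by default

# Before opening on Wed 2016-06-01: roll back to the previous close.
first = pd.Timestamp('2016-06-01 03:00:00')
print(bh.rollback(first))       # 2016-05-31 17:00:00

# On a Saturday: roll forward to Monday's open.
weekend = pd.Timestamp('2016-06-04 10:00:00')
print(bh.rollforward(weekend))  # 2016-06-06 09:00:00
```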
BUG: df.pivot_table: margins_name is ignored when the aggfunc is li… | diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index a4e6cc404a457..06f281a184621 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -86,7 +86,8 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
table = pivot_table(data, values=values, index=index,
columns=columns,
fill_value=fill_value, aggfunc=func,
- margins=margins)
+ margins=margins,
+ dropna=dropna, margins_name=margins_name)
pieces.append(table)
keys.append(func.__name__)
return concat(pieces, keys=keys, axis=1)
| - [x] closes #13354
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
…st #13354
| https://api.github.com/repos/pandas-dev/pandas/pulls/13363 | 2016-06-04T22:34:36Z | 2016-11-16T22:22:58Z | null | 2016-11-16T22:22:58Z |
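The one-line fix threads `dropna` and `margins_name` through the per-function recursion that handles a list-valued `aggfunc`. A quick sketch of the behavior being fixed (data invented for the example; before the fix, the margin row silently fell back to the default `'All'` label):

```python
import pandas as pd

df = pd.DataFrame({'k': ['a', 'a', 'b'], 'v': [1, 2, 4]})

# With a list of aggfuncs, pivot_table calls itself once per function,
# so margins_name must be forwarded into each recursive call.
table = df.pivot_table(index='k', values='v', aggfunc=['mean', 'sum'],
                       margins=True, margins_name='total')

print(list(table.index))                 # ['a', 'b', 'total']
print(table.loc['total', ('sum', 'v')])  # 7
```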
API/ENH: union Categorical | diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py
index 244af3a577fe2..bf1e1b3f40ab0 100644
--- a/asv_bench/benchmarks/categoricals.py
+++ b/asv_bench/benchmarks/categoricals.py
@@ -1,4 +1,8 @@
from .pandas_vb_common import *
+try:
+ from pandas.types.concat import union_categoricals
+except ImportError:
+ pass
import string
@@ -12,6 +16,17 @@ def time_concat_categorical(self):
concat([self.s, self.s])
+class union_categorical(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.a = pd.Categorical((list('aabbcd') * 1000000))
+ self.b = pd.Categorical((list('bbcdjk') * 1000000))
+
+ def time_union_categorical(self):
+ union_categoricals([self.a, self.b])
+
+
class categorical_value_counts(object):
goal_time = 1
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index b518bc947c2da..e971f1f28903f 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -648,6 +648,31 @@ In this case the categories are not the same and so an error is raised:
The same applies to ``df.append(df_different)``.
+.. _categorical.union:
+
+Unioning
+~~~~~~~~
+
+.. versionadded:: 0.18.2
+
+If you want to combine categoricals that do not necessarily have
+the same categories, the `union_categorical` function will
+combine a list-like of categoricals. The new categories
+will be the union of the categories being combined.
+
+.. ipython:: python
+
+ from pandas.types.concat import union_categoricals
+ a = pd.Categorical(["b", "c"])
+ b = pd.Categorical(["a", "b"])
+ union_categoricals([a, b])
+
+.. note::
+
+ `union_categoricals` only works with unordered categoricals
+ and will raise if any are ordered.
+
+
Getting Data In/Out
-------------------
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 7493150370e9f..c45a1704e228a 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -90,7 +90,7 @@ Other enhancements
- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
-
+- A ``union_categoricals`` function has been added for combining categoricals, see :ref:`Unioning Categoricals<categorical.union>` (:issue:`13361`)
- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
- ``Series`` has gained the properties ``.is_monotonic``, ``.is_monotonic_increasing``, ``.is_monotonic_decreasing``, similar to ``Index`` (:issue:`13336`)
diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py
index 9d9b0635e0f35..a8c86657a48cc 100644
--- a/pandas/tools/tests/test_concat.py
+++ b/pandas/tools/tests/test_concat.py
@@ -9,7 +9,8 @@
from pandas import (DataFrame, concat,
read_csv, isnull, Series, date_range,
Index, Panel, MultiIndex, Timestamp,
- DatetimeIndex)
+ DatetimeIndex, Categorical)
+from pandas.types.concat import union_categoricals
from pandas.util import testing as tm
from pandas.util.testing import (assert_frame_equal,
makeCustomDataframe as mkdf,
@@ -919,6 +920,54 @@ def test_concat_keys_with_none(self):
keys=['b', 'c', 'd', 'e'])
tm.assert_frame_equal(result, expected)
+ def test_union_categorical(self):
+ # GH 13361
+ data = [
+ (list('abc'), list('abd'), list('abcabd')),
+ ([0, 1, 2], [2, 3, 4], [0, 1, 2, 2, 3, 4]),
+ ([0, 1.2, 2], [2, 3.4, 4], [0, 1.2, 2, 2, 3.4, 4]),
+
+ (pd.date_range('2014-01-01', '2014-01-05'),
+ pd.date_range('2014-01-06', '2014-01-07'),
+ pd.date_range('2014-01-01', '2014-01-07')),
+
+ (pd.date_range('2014-01-01', '2014-01-05', tz='US/Central'),
+ pd.date_range('2014-01-06', '2014-01-07', tz='US/Central'),
+ pd.date_range('2014-01-01', '2014-01-07', tz='US/Central')),
+
+ (pd.period_range('2014-01-01', '2014-01-05'),
+ pd.period_range('2014-01-06', '2014-01-07'),
+ pd.period_range('2014-01-01', '2014-01-07')),
+ ]
+
+ for a, b, combined in data:
+ result = union_categoricals([Categorical(a), Categorical(b)])
+ expected = Categorical(combined)
+ tm.assert_categorical_equal(result, expected,
+ check_category_order=True)
+
+ # new categories ordered by appearance
+ s = Categorical(['x', 'y', 'z'])
+ s2 = Categorical(['a', 'b', 'c'])
+ result = union_categoricals([s, s2]).categories
+ expected = Index(['x', 'y', 'z', 'a', 'b', 'c'])
+ tm.assert_index_equal(result, expected)
+
+ # can't be ordered
+ s = Categorical([0, 1.2, 2], ordered=True)
+ s2 = Categorical([0, 1.2, 2], ordered=True)
+ with tm.assertRaises(TypeError):
+ union_categoricals([s, s2])
+
+ # must exactly match types
+ s = Categorical([0, 1.2, 2])
+ s2 = Categorical([2, 3, 4])
+ with tm.assertRaises(TypeError):
+ union_categoricals([s, s2])
+
+ with tm.assertRaises(ValueError):
+ union_categoricals([])
+
def test_concat_bug_1719(self):
ts1 = tm.makeTimeSeries()
ts2 = tm.makeTimeSeries()[::2]
diff --git a/pandas/types/concat.py b/pandas/types/concat.py
index 5cd7abb6889b7..53db9ddf79a5c 100644
--- a/pandas/types/concat.py
+++ b/pandas/types/concat.py
@@ -201,6 +201,57 @@ def convert_categorical(x):
return Categorical(concatted, rawcats)
+def union_categoricals(to_union):
+ """
+ Combine list-like of Categoricals, unioning categories. All
+ must have the same dtype, and none can be ordered.
+
+ .. versionadded:: 0.18.2
+
+ Parameters
+ ----------
+ to_union : list-like of Categoricals
+
+ Returns
+ -------
+ Categorical
+ A single array, categories will be ordered as they
+ appear in the list
+
+ Raises
+ ------
+ TypeError
+ If any of the categoricals are ordered or all do not
+ have the same dtype
+ ValueError
Empty list of categoricals passed
+ """
+ from pandas import Index, Categorical
+
+ if len(to_union) == 0:
+ raise ValueError('No Categoricals to union')
+
+ first = to_union[0]
+ if any(c.ordered for c in to_union):
+ raise TypeError("Can only combine unordered Categoricals")
+
+ if not all(com.is_dtype_equal(c.categories.dtype, first.categories.dtype)
+ for c in to_union):
+ raise TypeError("dtype of categories must be the same")
+
+ cats = first.categories
+ unique_cats = cats.append([c.categories for c in to_union[1:]]).unique()
+ categories = Index(unique_cats)
+
+ new_codes = []
+ for c in to_union:
+ indexer = categories.get_indexer(c.categories)
+ new_codes.append(indexer.take(c.codes))
+ codes = np.concatenate(new_codes)
+ return Categorical(codes, categories=categories, ordered=False,
+ fastpath=True)
+
+
def _concat_datetime(to_concat, axis=0, typs=None):
"""
provide concatenation of an datetimelike array of arrays each of which is a
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 03ccfcab24f58..d13873fcf2c84 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -963,14 +963,40 @@ def assertNotIsInstance(obj, cls, msg=''):
def assert_categorical_equal(left, right, check_dtype=True,
- obj='Categorical'):
+ obj='Categorical', check_category_order=True):
+ """Test that categoricals are eqivalent
+
+ Parameters
+ ----------
+ left, right : Categorical
+ Categoricals to compare
+ check_dtype : bool, default True
+ Check that integer dtype of the codes are the same
+ obj : str, default 'Categorical'
+ Specify object name being compared, internally used to show appropriate
+ assertion message
+ check_category_order : bool, default True
+ Whether the order of the categories should be compared, which
+ implies identical integer codes. If False, only the resulting
+ values are compared. The ordered attribute is
+ checked regardless.
+ """
assertIsInstance(left, pd.Categorical, '[Categorical] ')
assertIsInstance(right, pd.Categorical, '[Categorical] ')
- assert_index_equal(left.categories, right.categories,
- obj='{0}.categories'.format(obj))
- assert_numpy_array_equal(left.codes, right.codes, check_dtype=check_dtype,
- obj='{0}.codes'.format(obj))
+ if check_category_order:
+ assert_index_equal(left.categories, right.categories,
+ obj='{0}.categories'.format(obj))
+ assert_numpy_array_equal(left.codes, right.codes,
+ check_dtype=check_dtype,
+ obj='{0}.codes'.format(obj))
+ else:
+ assert_index_equal(left.categories.sort_values(),
+ right.categories.sort_values(),
+ obj='{0}.categories'.format(obj))
+ assert_index_equal(left.categories.take(left.codes),
+ right.categories.take(right.codes),
+ obj='{0}.values'.format(obj))
assert_attr_equal('ordered', left, right, obj=obj)
| I was looking into #10153 (parsing Categoricals directly) and one thing that seems to be needed
for that is a good way to combine Categoricals. That part alone is complicated enough
so I decided to do a separate PR for it.
This adds a `union_categoricals` function that takes a list of (identically dtyped, unordered)
Categoricals and combines them without doing a full factorization again, unioning the categories.
This seems like it might be generically useful; does it belong in the public api somewhere and/or
as a method on `Categorical`? Maybe as an option on `concat`?
Might also be useful for dask? e.g. https://github.com/dask/dask/issues/1040, cc @mrocklin
An example timing is below. The gain obviously depends on the density of the categories,
but for relatively small numbers of categories it seems to generally be in the 5-6x
speed-up range vs concatenating everything and re-categorizing.
``` python
from pandas.types.concat import union_categoricals
group1 = ['aaaaa', 'bbbbb', 'cccccc', 'ddddddd', 'eeeeeeee']
group2 = group1[3:] + ['fffffff', 'gggggggg', 'ddddddddd']
a = np.random.choice(group1, 1000000).astype('object')
b = np.random.choice(group2, 1000000).astype('object')
a_cat, b_cat = pd.Categorical(a), pd.Categorical(b)
In [10]: %timeit np.concatenate([a,b,a,b,a])
10 loops, best of 3: 82.7 ms per loop
In [12]: %timeit pd.Categorical(np.concatenate([a_cat.get_values(),b_cat.get_values(),
a_cat.get_values(),b_cat.get_values(),
a_cat.get_values()]))
1 loops, best of 3: 344 ms per loop
In [14]: %timeit union_categoricals([a_cat,b_cat,a_cat,b_cat,a_cat])
10 loops, best of 3: 40.9 ms per loop
```
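
The remapping the diff performs can be sketched in plain pandas. `union_categoricals_sketch` below is a hypothetical standalone version for illustration, not the actual implementation: union the category sets once, then translate each Categorical's codes into the combined category space instead of re-factorizing the values.

``` python
import numpy as np
import pandas as pd

def union_categoricals_sketch(to_union):
    """Combine unordered Categoricals by unioning categories and
    remapping codes, avoiding a full re-factorization of the values."""
    first = to_union[0]
    # union of all category sets, keeping first-seen order
    categories = pd.Index(
        first.categories.append([c.categories for c in to_union[1:]]).unique())
    new_codes = []
    for c in to_union:
        # indexer maps each old category position -> its new position
        indexer = categories.get_indexer(c.categories)
        new_codes.append(indexer.take(c.codes))
    codes = np.concatenate(new_codes)
    return pd.Categorical.from_codes(codes, categories)

a = pd.Categorical(['a', 'b', 'a'])
b = pd.Categorical(['b', 'c'])
combined = union_categoricals_sketch([a, b])
```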
| https://api.github.com/repos/pandas-dev/pandas/pulls/13361 | 2016-06-04T12:03:11Z | 2016-06-08T11:32:41Z | null | 2017-03-17T01:04:23Z |
DEPR, DOC: Deprecate buffer_lines in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 4eb42e1fb918d..cfc88d335f862 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -176,6 +176,12 @@ low_memory : boolean, default ``True``
Note that the entire file is read into a single DataFrame regardless,
use the ``chunksize`` or ``iterator`` parameter to return the data in chunks.
(Only valid with C parser)
+buffer_lines : int, default None
+ DEPRECATED: this argument will be removed in a future version because its
+ value is not respected by the parser
+
+ If ``low_memory`` is ``True``, specify the number of rows to be read for
+ each chunk. (Only valid with C parser)
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 7493150370e9f..191f5c1ccbf40 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -294,6 +294,7 @@ Deprecations
^^^^^^^^^^^^
- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv`` and will be removed in a future version (:issue:`13320`)
+- ``buffer_lines`` has been deprecated in ``pd.read_csv`` and will be removed in a future version (:issue:`13360`)
.. _whatsnew_0182.performance:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 2c8726f588522..5936d256c6d2a 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -227,6 +227,12 @@
Note that the entire file is read into a single DataFrame regardless,
use the `chunksize` or `iterator` parameter to return the data in chunks.
(Only valid with C parser)
+buffer_lines : int, default None
+ DEPRECATED: this argument will be removed in a future version because its
+ value is not respected by the parser
+
+ If low_memory is True, specify the number of rows to be read for each
+ chunk. (Only valid with C parser)
compact_ints : boolean, default False
DEPRECATED: this argument will be removed in a future version
@@ -234,7 +240,6 @@
the parser will attempt to cast it as the smallest integer dtype possible,
either signed or unsigned depending on the specification from the
`use_unsigned` parameter.
-
use_unsigned : boolean, default False
DEPRECATED: this argument will be removed in a future version
@@ -448,6 +453,7 @@ def _read(filepath_or_buffer, kwds):
'float_precision',
])
_deprecated_args = set([
+ 'buffer_lines',
'compact_ints',
'use_unsigned',
])
@@ -806,7 +812,8 @@ def _clean_options(self, options, engine):
_validate_header_arg(options['header'])
for arg in _deprecated_args:
- if result[arg] != _c_parser_defaults[arg]:
+ parser_default = _c_parser_defaults[arg]
+ if result.get(arg, parser_default) != parser_default:
warnings.warn("The '{arg}' argument has been deprecated "
"and will be removed in a future version"
.format(arg=arg), FutureWarning, stacklevel=2)
diff --git a/pandas/io/tests/parser/test_parsers.py b/pandas/io/tests/parser/test_parsers.py
index ea8ce9b616f36..fda7b28769647 100644
--- a/pandas/io/tests/parser/test_parsers.py
+++ b/pandas/io/tests/parser/test_parsers.py
@@ -72,14 +72,12 @@ def read_csv(self, *args, **kwds):
kwds = kwds.copy()
kwds['engine'] = self.engine
kwds['low_memory'] = self.low_memory
- kwds['buffer_lines'] = 2
return read_csv(*args, **kwds)
def read_table(self, *args, **kwds):
kwds = kwds.copy()
kwds['engine'] = self.engine
kwds['low_memory'] = True
- kwds['buffer_lines'] = 2
return read_table(*args, **kwds)
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index e820924d2be8b..97862ffa90cef 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -124,6 +124,7 @@ def test_deprecated_args(self):
# deprecated arguments with non-default values
deprecated = {
+ 'buffer_lines': True,
'compact_ints': True,
'use_unsigned': True,
}
@@ -132,6 +133,10 @@ def test_deprecated_args(self):
for engine in engines:
for arg, non_default_val in deprecated.items():
+ if engine == 'python' and arg == 'buffer_lines':
+ # unsupported --> exception is raised first
+ continue
+
with tm.assert_produces_warning(
FutureWarning, check_stacklevel=False):
kwargs = {arg: non_default_val}
| `buffer_lines` is not respected, as it is determined internally via a heuristic involving `table_width` (see <a href="https://github.com/pydata/pandas/blob/master/pandas/parser.pyx#L527">here</a> for how it is computed).
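
The `_clean_options` change switches to `result.get(arg, parser_default)` so engines that never populate a deprecated arg (e.g. the python parser and `buffer_lines`) don't raise a `KeyError` before the warning check. A minimal standalone sketch of that pattern — the names mirror the diff, but this is an illustration, not the pandas code path:

``` python
import warnings

_c_parser_defaults = {'buffer_lines': None, 'compact_ints': False,
                      'use_unsigned': False}
_deprecated_args = {'buffer_lines', 'compact_ints', 'use_unsigned'}

def check_deprecated(result):
    """Warn only when a deprecated arg carries a non-default value;
    .get() keeps this safe when an engine never set the key at all."""
    for arg in _deprecated_args:
        parser_default = _c_parser_defaults[arg]
        if result.get(arg, parser_default) != parser_default:
            warnings.warn("The '{arg}' argument has been deprecated "
                          "and will be removed in a future version"
                          .format(arg=arg), FutureWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    check_deprecated({'buffer_lines': 2})   # non-default -> warns
    check_deprecated({})                    # key absent -> silent
```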
| https://api.github.com/repos/pandas-dev/pandas/pulls/13360 | 2016-06-04T02:33:30Z | 2016-06-05T13:57:52Z | null | 2016-06-06T13:24:05Z |
Make pd.read_hdf('data.h5') work when pandas object stored contained categorical columns | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 950bf397f43b5..7cf27d13a44ac 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -374,3 +374,6 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+
+- Bug in ``pd.read_hdf()`` where attempting to load an HDF file with a single dataset (that had one or more categorical columns) failed unless the key argument was set to the name of the dataset. (:issue:`13231`)
+
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index fcf5125d956c6..6c7623ec7ed4a 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -331,11 +331,20 @@ def read_hdf(path_or_buf, key=None, **kwargs):
try:
if key is None:
- keys = store.keys()
- if len(keys) != 1:
- raise ValueError('key must be provided when HDF file contains '
- 'multiple datasets.')
- key = keys[0]
+ groups = store.groups()
+ if len(groups) == 0:
+ raise ValueError('No dataset in HDF file.')
+ candidate_only_group = groups[0]
+
+ # For the HDF file to have only one dataset, all other groups
+ # should then be metadata groups for that candidate group. (This
+ # assumes that the groups() method enumerates parent groups
+ # before their children.)
+ for group_to_check in groups[1:]:
+ if not _is_metadata_of(group_to_check, candidate_only_group):
+ raise ValueError('key must be provided when HDF file '
+ 'contains multiple datasets.')
+ key = candidate_only_group._v_pathname
return store.select(key, auto_close=auto_close, **kwargs)
except:
# if there is an error, close the store
@@ -347,6 +356,20 @@ def read_hdf(path_or_buf, key=None, **kwargs):
raise
+def _is_metadata_of(group, parent_group):
+ """Check if a given group is a metadata group for a given parent_group."""
+ if group._v_depth <= parent_group._v_depth:
+ return False
+
+ current = group
+ while current._v_depth > 1:
+ parent = current._v_parent
+ if parent == parent_group and current._v_name == 'meta':
+ return True
+ current = current._v_parent
+ return False
+
+
class HDFStore(StringMixin):
"""
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 96b66265ea586..9c13162bd774c 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -46,8 +46,8 @@
from distutils.version import LooseVersion
-_default_compressor = LooseVersion(tables.__version__) >= '2.2' \
- and 'blosc' or 'zlib'
+_default_compressor = ('blosc' if LooseVersion(tables.__version__) >= '2.2'
+ else 'zlib')
_multiprocess_can_split_ = False
@@ -4877,6 +4877,9 @@ def test_read_nokey(self):
df = DataFrame(np.random.rand(4, 5),
index=list('abcd'),
columns=list('ABCDE'))
+
+ # Categorical dtype not supported for "fixed" format. So no need
+ # to test with that dtype in the dataframe here.
with ensure_clean_path(self.path) as path:
df.to_hdf(path, 'df', mode='a')
reread = read_hdf(path)
@@ -4884,6 +4887,24 @@ def test_read_nokey(self):
df.to_hdf(path, 'df2', mode='a')
self.assertRaises(ValueError, read_hdf, path)
+ def test_read_nokey_table(self):
+ # GH13231
+ df = DataFrame({'i': range(5),
+ 'c': Series(list('abacd'), dtype='category')})
+
+ with ensure_clean_path(self.path) as path:
+ df.to_hdf(path, 'df', mode='a', format='table')
+ reread = read_hdf(path)
+ assert_frame_equal(df, reread)
+ df.to_hdf(path, 'df2', mode='a', format='table')
+ self.assertRaises(ValueError, read_hdf, path)
+
+ def test_read_nokey_empty(self):
+ with ensure_clean_path(self.path) as path:
+ store = HDFStore(path)
+ store.close()
+ self.assertRaises(ValueError, read_hdf, path)
+
def test_read_from_pathlib_path(self):
# GH11773
| - [x] closes #13231
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
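
The parent-chain walk in `_is_metadata_of` can be exercised without PyTables; `FakeGroup` here is a hypothetical stand-in exposing just the `_v_*` attributes the check uses:

``` python
class FakeGroup:
    """Minimal stand-in for a PyTables group, just enough to walk
    the parent chain the way _is_metadata_of does."""
    def __init__(self, name, parent=None):
        self._v_name = name
        self._v_parent = parent
        self._v_depth = 0 if parent is None else parent._v_depth + 1

def is_metadata_of(group, parent_group):
    # a metadata group sits strictly deeper, under a 'meta' child
    if group._v_depth <= parent_group._v_depth:
        return False
    current = group
    while current._v_depth > 1:
        parent = current._v_parent
        if parent is parent_group and current._v_name == 'meta':
            return True
        current = parent
    return False

root = FakeGroup('/')
df = FakeGroup('df', root)            # the one real dataset
meta = FakeGroup('meta', df)          # categorical metadata group
values = FakeGroup('values_block_0', meta)
other = FakeGroup('df2', root)        # a second real dataset
```

With this sketch, `meta` and `values` count as metadata of `df` (so `df` can still be the "only" dataset), while a sibling like `other` does not.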
| https://api.github.com/repos/pandas-dev/pandas/pulls/13359 | 2016-06-03T23:07:02Z | 2016-06-05T14:08:12Z | null | 2016-06-05T18:41:10Z |
ENH: add pd.asof_merge | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 0e893308dd935..0dde341d820e3 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -151,6 +151,8 @@ Data manipulations
cut
qcut
merge
+ merge_ordered
+ merge_asof
concat
get_dummies
factorize
@@ -943,6 +945,7 @@ Time series-related
:toctree: generated/
DataFrame.asfreq
+ DataFrame.asof
DataFrame.shift
DataFrame.first_valid_index
DataFrame.last_valid_index
diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index ba675d9aac830..74871fe68fc08 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -104,7 +104,7 @@ some configurable handling of "what to do with the other axes":
- ``ignore_index`` : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, ...,
n - 1. This is useful if you are concatenating objects where the
- concatenation axis does not have meaningful indexing information. Note
+ concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
- ``copy`` : boolean, default True. If False, do not copy data unnecessarily.
@@ -544,12 +544,12 @@ Here's a description of what each argument is for:
can be avoided are somewhat pathological but this option is provided
nonetheless.
- ``indicator``: Add a column to the output DataFrame called ``_merge``
- with information on the source of each row. ``_merge`` is Categorical-type
- and takes on a value of ``left_only`` for observations whose merge key
- only appears in ``'left'`` DataFrame, ``right_only`` for observations whose
- merge key only appears in ``'right'`` DataFrame, and ``both`` if the
- observation's merge key is found in both.
-
+ with information on the source of each row. ``_merge`` is Categorical-type
+ and takes on a value of ``left_only`` for observations whose merge key
+ only appears in ``'left'`` DataFrame, ``right_only`` for observations whose
+ merge key only appears in ``'right'`` DataFrame, and ``both`` if the
+ observation's merge key is found in both.
+
.. versionadded:: 0.17.0
@@ -718,7 +718,7 @@ The merge indicator
df2 = DataFrame({'col1':[1,2,2],'col_right':[2,2,2]})
merge(df1, df2, on='col1', how='outer', indicator=True)
-The ``indicator`` argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
+The ``indicator`` argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
.. ipython:: python
@@ -1055,34 +1055,6 @@ them together on their indexes. The same is true for ``Panel.join``.
labels=['left', 'right', 'right2'], vertical=False);
plt.close('all');
-.. _merging.ordered_merge:
-
-Merging Ordered Data
-~~~~~~~~~~~~~~~~~~~~
-
-New in v0.8.0 is the ordered_merge function for combining time series and other
-ordered data. In particular it has an optional ``fill_method`` keyword to
-fill/interpolate missing data:
-
-.. ipython:: python
-
- left = DataFrame({'k': ['K0', 'K1', 'K1', 'K2'],
- 'lv': [1, 2, 3, 4],
- 's': ['a', 'b', 'c', 'd']})
-
- right = DataFrame({'k': ['K1', 'K2', 'K4'],
- 'rv': [1, 2, 3]})
-
- result = ordered_merge(left, right, fill_method='ffill', left_by='s')
-
-.. ipython:: python
- :suppress:
-
- @savefig merging_ordered_merge.png
- p.plot([left, right], result,
- labels=['left', 'right'], vertical=True);
- plt.close('all');
-
.. _merging.combine_first.update:
Merging together values within Series or DataFrame columns
@@ -1132,4 +1104,124 @@ values inplace:
@savefig merging_update.png
p.plot([df1_copy, df2], df1,
labels=['df1', 'df2'], vertical=False);
- plt.close('all');
\ No newline at end of file
+ plt.close('all');
+
+.. _merging.time_series:
+
+Timeseries friendly merging
+---------------------------
+
+.. _merging.merge_ordered:
+
+Merging Ordered Data
+~~~~~~~~~~~~~~~~~~~~
+
+The ``pd.merge_ordered()`` function allows combining time series and other
+ordered data. In particular it has an optional ``fill_method`` keyword to
+fill/interpolate missing data:
+
+.. ipython:: python
+
+ left = DataFrame({'k': ['K0', 'K1', 'K1', 'K2'],
+ 'lv': [1, 2, 3, 4],
+ 's': ['a', 'b', 'c', 'd']})
+
+ right = DataFrame({'k': ['K1', 'K2', 'K4'],
+ 'rv': [1, 2, 3]})
+
+ result = pd.merge_ordered(left, right, fill_method='ffill', left_by='s')
+
+.. ipython:: python
+ :suppress:
+
+ @savefig merging_ordered_merge.png
+ p.plot([left, right], result,
+ labels=['left', 'right'], vertical=True);
+ plt.close('all');
+
+.. _merging.merge_asof:
+
+Merging AsOf
+~~~~~~~~~~~~
+
+.. versionadded:: 0.18.2
+
+``pd.merge_asof()`` is similar to an ordered left-join except that we
+match on the nearest key rather than on equal keys.
+
+For each row in the ``left`` DataFrame, we select the last row in the ``right``
+DataFrame whose ``on`` key is less than the left's key. Both DataFrames must
+be sorted by the key.
+
+Optionally, an asof merge can perform a group-wise merge. This matches on the ``by`` key equally,
+in addition to the nearest match on the ``on`` key.
+
+For example; we might have ``trades`` and ``quotes`` and we want to ``asof`` merge them.
+
+.. ipython:: python
+
+ trades = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.038',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048']),
+ 'ticker': ['MSFT', 'MSFT',
+ 'GOOG', 'GOOG', 'AAPL'],
+ 'price': [51.95, 51.95,
+ 720.77, 720.92, 98.00],
+ 'quantity': [75, 155,
+ 100, 100, 100]},
+ columns=['time', 'ticker', 'price', 'quantity'])
+
+ quotes = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.023',
+ '20160525 13:30:00.030',
+ '20160525 13:30:00.041',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.049',
+ '20160525 13:30:00.072',
+ '20160525 13:30:00.075']),
+ 'ticker': ['GOOG', 'MSFT', 'MSFT',
+ 'MSFT', 'GOOG', 'AAPL', 'GOOG',
+ 'MSFT'],
+ 'bid': [720.50, 51.95, 51.97, 51.99,
+ 720.50, 97.99, 720.50, 52.01],
+ 'ask': [720.93, 51.96, 51.98, 52.00,
+ 720.93, 98.01, 720.88, 52.03]},
+ columns=['time', 'ticker', 'bid', 'ask'])
+
+.. ipython:: python
+
+ trades
+ quotes
+
+By default we are taking the asof of the quotes.
+
+.. ipython:: python
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+
+We only asof within ``2ms`` between the quote time and the trade time.
+
+.. ipython:: python
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=pd.Timedelta('2ms'))
+
+We only asof within ``10ms`` between the quote time and the trade time and we exclude exact matches on time.
+Note that though we exclude the exact matches (of the quotes), prior quotes DO propagate to that point
+in time.
+
+.. ipython:: python
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=pd.Timedelta('10ms'),
+ allow_exact_matches=False)
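
The matching rule described above — take the last quote at or before each trade time (strictly before with ``allow_exact_matches=False``), dropping matches farther away than ``tolerance`` — can be sketched per group with ``numpy.searchsorted``. This is an illustration of the semantics only, not the Cython implementation:

``` python
import numpy as np

def asof_indexer(left_on, right_on, allow_exact_matches=True, tolerance=None):
    """For each left key, the index of the last right key at or before it
    (strictly before when allow_exact_matches=False); -1 when there is no
    match, or when the nearest match is farther away than tolerance."""
    side = 'right' if allow_exact_matches else 'left'
    idx = np.searchsorted(right_on, left_on, side=side) - 1
    if tolerance is not None:
        too_far = (left_on - right_on.take(idx, mode='clip')) > tolerance
        idx = np.where(too_far, -1, idx)
    return idx

# keys from the merge_asof example in the whatsnew entry
left = np.array([1, 5, 10])
right = np.array([1, 2, 3, 6, 7])
```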
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 0d4f07d19f880..cd436aa18a68b 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -19,6 +19,97 @@ Highlights include:
New features
~~~~~~~~~~~~
+.. _whatsnew_0182.enhancements.asof_merge:
+
+``pd.merge_asof()`` for asof-style time-series joining
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A long-requested feature has been added through the :func:`merge_asof` function, to
+support asof-style joining of time-series (:issue:`1870`). Full documentation is
+:ref:`here <merging.merge_asof>`
+
+The :func:`merge_asof` function performs an asof merge, which is similar to a left-join
+except that we match on the nearest key rather than on equal keys.
+
+.. ipython:: python
+
+ left = pd.DataFrame({'a': [1, 5, 10],
+ 'left_val': ['a', 'b', 'c']})
+ right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
+ 'right_val': [1, 2, 3, 6, 7]})
+
+ left
+ right
+
+We typically want to match exactly when possible, and use the most
+recent value otherwise.
+
+.. ipython:: python
+
+ pd.merge_asof(left, right, on='a')
+
+We can also match rows ONLY with prior data, and not an exact match.
+
+.. ipython:: python
+
+ pd.merge_asof(left, right, on='a', allow_exact_matches=False)
+
+
+In a typical time-series example, we have ``trades`` and ``quotes`` and we want to ``asof-join`` them.
+This also illustrates using the ``by`` parameter to group data before merging.
+
+.. ipython:: python
+
+ trades = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.038',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048']),
+ 'ticker': ['MSFT', 'MSFT',
+ 'GOOG', 'GOOG', 'AAPL'],
+ 'price': [51.95, 51.95,
+ 720.77, 720.92, 98.00],
+ 'quantity': [75, 155,
+ 100, 100, 100]},
+ columns=['time', 'ticker', 'price', 'quantity'])
+
+ quotes = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.023',
+ '20160525 13:30:00.030',
+ '20160525 13:30:00.041',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.049',
+ '20160525 13:30:00.072',
+ '20160525 13:30:00.075']),
+ 'ticker': ['GOOG', 'MSFT', 'MSFT',
+ 'MSFT', 'GOOG', 'AAPL', 'GOOG',
+ 'MSFT'],
+ 'bid': [720.50, 51.95, 51.97, 51.99,
+ 720.50, 97.99, 720.50, 52.01],
+ 'ask': [720.93, 51.96, 51.98, 52.00,
+ 720.93, 98.01, 720.88, 52.03]},
+ columns=['time', 'ticker', 'bid', 'ask'])
+
+.. ipython:: python
+
+ trades
+ quotes
+
+An asof merge joins on the ``on`` key, typically a datetimelike field, which must be ordered, and
+in this case we are using a grouper in the ``by`` field. This is like a left-outer join, except
+that forward filling happens automatically, taking the most recent non-NaN value.
+
+.. ipython:: python
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+
+This returns a merged DataFrame with the entries in the same order as the original left
+passed DataFrame (``trades`` in this case), with the fields of the ``quotes`` merged in.
+
.. _whatsnew_0182.enhancements.read_csv_dupe_col_names_support:
``pd.read_csv`` has improved support for duplicate column names
@@ -124,8 +215,8 @@ Other enhancements
idx.where([True, False, True])
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
+- ``DataFrame`` has gained the ``.asof()`` method to return the last non-NaN values according to the selected subset (:issue:`13358`)
- Consistent with the Python API, ``pd.read_csv()`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
-
- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
- A ``union_categorical`` function has been added for combining categoricals, see :ref:`Unioning Categoricals<categorical.union>` (:issue:`13361`)
@@ -335,6 +426,7 @@ Deprecations
- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13320`)
- ``buffer_lines`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13360`)
- ``as_recarray`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13373`)
+- top-level ``pd.ordered_merge()`` has been renamed to ``pd.merge_ordered()`` and the original name will be removed in a future version (:issue:`13358`)
.. _whatsnew_0182.performance:
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 53642fdcfeb31..350898c9925e7 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -43,7 +43,8 @@
from pandas.io.api import *
from pandas.computation.api import *
-from pandas.tools.merge import merge, concat, ordered_merge
+from pandas.tools.merge import (merge, concat, ordered_merge,
+ merge_ordered, merge_asof)
from pandas.tools.pivot import pivot_table, crosstab
from pandas.tools.plotting import scatter_matrix, plot_params
from pandas.tools.tile import cut, qcut
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index f1fd0204e2fd2..8e659a8566adb 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -1,3 +1,5 @@
+# cython: profile=False
+
from numpy cimport *
cimport numpy as np
import numpy as np
@@ -982,21 +984,35 @@ def is_lexsorted(list list_of_arrays):
@cython.boundscheck(False)
-def groupby_indices(ndarray values):
+def groupby_indices(dict ids, ndarray[int64_t] labels, ndarray[int64_t] counts):
+ """
+ turn group_labels output into a combined indexer maping the labels to
+ indexers
+
+ Parameters
+ ----------
+ ids: dict
+ mapping of label -> group indexer
+ labels: ndarray
+ labels for positions
+ counts: ndarray
+ group counts
+
+ Returns
+ -------
+ list of ndarrays of indices
+
+ """
cdef:
- Py_ssize_t i, n = len(values)
- ndarray[int64_t] labels, counts, arr, seen
+ Py_ssize_t i, n = len(labels)
+ ndarray[int64_t] arr, seen
int64_t loc
- dict ids = {}
- object val
int64_t k
+ dict result = {}
- ids, labels, counts = group_labels(values)
seen = np.zeros_like(counts)
- # try not to get in trouble here...
cdef int64_t **vecs = <int64_t **> malloc(len(ids) * sizeof(int64_t*))
- result = {}
for i from 0 <= i < len(counts):
arr = np.empty(counts[i], dtype=np.int64)
result[ids[i]] = arr
@@ -1014,7 +1030,6 @@ def groupby_indices(ndarray values):
seen[k] = loc + 1
free(vecs)
-
return result
@cython.wraparound(False)
@@ -1023,8 +1038,15 @@ def group_labels(ndarray[object] values):
"""
Compute label vector from input values and associated useful data
+ Parameters
+ ----------
+ values: object ndarray
+
Returns
-------
+ tuple of (reverse mappings of label -> group indexer,
+ factorized labels ndarray,
+ group counts ndarray)
"""
cdef:
Py_ssize_t i, n = len(values)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 69def7502a6f7..b4b35953b4282 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -153,6 +153,12 @@
merged : DataFrame
The output type will the be same as 'left', if it is a subclass
of DataFrame.
+
+See also
+--------
+merge_ordered
+merge_asof
+
"""
# -----------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0852c5a293f4e..348281d1a7e30 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -13,7 +13,7 @@
InvalidIndexError)
import pandas.core.indexing as indexing
from pandas.tseries.index import DatetimeIndex
-from pandas.tseries.period import PeriodIndex
+from pandas.tseries.period import PeriodIndex, Period
from pandas.core.internals import BlockManager
import pandas.core.algorithms as algos
import pandas.core.common as com
@@ -3629,6 +3629,93 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False,
res = res.T
return res
+ # ----------------------------------------------------------------------
+ # Timeseries methods Methods
+
+ def asof(self, where, subset=None):
+ """
+ The last row without any NaN is taken (or the last row without
+ NaN considering only the subset of columns in the case of a DataFrame)
+
+ .. versionadded:: 0.18.2 For DataFrame
+
+ If there is no good value, NaN is returned.
+
+ Parameters
+ ----------
+ where : date or array of dates
+ subset : string or list of strings, default None
+ if not None use these columns for NaN propagation
+
+ Notes
+ -----
+ Dates are assumed to be sorted
+ Raises if this is not the case
+
+ Returns
+ -------
+ where is scalar
+
+ - value or NaN if input is Series
+ - Series if input is DataFrame
+
+ where is Index: same shape object as input
+
+ See Also
+ --------
+ merge_asof
+
+ """
+
+ if isinstance(where, compat.string_types):
+ from pandas import to_datetime
+ where = to_datetime(where)
+
+ if not self.index.is_monotonic:
+ raise ValueError("asof requires a sorted index")
+
+ if isinstance(self, ABCSeries):
+ if subset is not None:
+ raise ValueError("subset is not valid for Series")
+ nulls = self.isnull()
+ elif self.ndim > 2:
+ raise NotImplementedError("asof is not implemented "
+ "for {type}".format(type=type(self)))
+ else:
+ if subset is None:
+ subset = self.columns
+ if not is_list_like(subset):
+ subset = [subset]
+ nulls = self[subset].isnull().any(1)
+
+ if not is_list_like(where):
+ start = self.index[0]
+ if isinstance(self.index, PeriodIndex):
+ where = Period(where, freq=self.index.freq).ordinal
+ start = start.ordinal
+
+ if where < start:
+ return np.nan
+
+ loc = self.index.searchsorted(where, side='right')
+ if loc > 0:
+ loc -= 1
+ while nulls[loc] and loc > 0:
+ loc -= 1
+ return self.iloc[loc]
+
+ if not isinstance(where, Index):
+ where = Index(where)
+
+ locs = self.index.asof_locs(where, ~(nulls.values))
+
+ # mask the missing
+ missing = locs == -1
+ data = self.take(locs, is_copy=False)
+ data.index = where
+ data.loc[missing] = np.nan
+ return data
+
# ----------------------------------------------------------------------
# Action Methods
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index bea62e98e4a2a..cc639b562dab8 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -4329,8 +4329,19 @@ def _reorder_by_uniques(uniques, labels):
def _groupby_indices(values):
- return _algos.groupby_indices(_values_from_object(
- com._ensure_object(values)))
+
+ if is_categorical_dtype(values):
+
+ # we have a categorical, so we can do quite a bit
+ # bit better than factorizing again
+ reverse = dict(enumerate(values.categories))
+ codes = values.codes.astype('int64')
+ _, counts = _hash.value_count_scalar64(codes, False)
+ else:
+ reverse, codes, counts = _algos.group_labels(
+ _values_from_object(com._ensure_object(values)))
+
+ return _algos.groupby_indices(reverse, codes, counts)
def numpy_groupby(data, labels, axis=0):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 43b4ba3a51212..cf1639bacc3be 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -36,7 +36,7 @@
CombinedDatetimelikeProperties)
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.tdi import TimedeltaIndex
-from pandas.tseries.period import PeriodIndex, Period
+from pandas.tseries.period import PeriodIndex
from pandas import compat
from pandas.util.terminal import get_terminal_size
from pandas.compat import zip, u, OrderedDict, StringIO
@@ -46,7 +46,6 @@
import pandas.core.algorithms as algos
import pandas.core.common as com
-import pandas.core.datetools as datetools
import pandas.core.nanops as nanops
import pandas.formats.format as fmt
from pandas.util.decorators import Appender, deprecate_kwarg, Substitution
@@ -2601,52 +2600,6 @@ def last_valid_index(self):
# ----------------------------------------------------------------------
# Time series-oriented methods
- def asof(self, where):
- """
- Return last good (non-NaN) value in Series if value is NaN for
- requested date.
-
- If there is no good value, NaN is returned.
-
- Parameters
- ----------
- where : date or array of dates
-
- Notes
- -----
- Dates are assumed to be sorted
-
- Returns
- -------
- value or NaN
- """
- if isinstance(where, compat.string_types):
- where = datetools.to_datetime(where)
-
- values = self._values
-
- if not hasattr(where, '__iter__'):
- start = self.index[0]
- if isinstance(self.index, PeriodIndex):
- where = Period(where, freq=self.index.freq).ordinal
- start = start.ordinal
-
- if where < start:
- return np.nan
- loc = self.index.searchsorted(where, side='right')
- if loc > 0:
- loc -= 1
- while isnull(values[loc]) and loc > 0:
- loc -= 1
- return values[loc]
-
- if not isinstance(where, Index):
- where = Index(where)
-
- locs = self.index.asof_locs(where, notnull(values))
- new_values = algos.take_1d(values, locs)
- return self._constructor(new_values, index=where).__finalize__(self)
-
def to_timestamp(self, freq=None, how='start', copy=True):
"""
Cast to datetimeindex of timestamps, at *beginning* of period
diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx
index f718c1ab0b8da..e1c3733a0449d 100644
--- a/pandas/hashtable.pyx
+++ b/pandas/hashtable.pyx
@@ -1075,7 +1075,8 @@ def mode_int64(int64_t[:] values):
@cython.wraparound(False)
@cython.boundscheck(False)
-def duplicated_int64(ndarray[int64_t, ndim=1] values, object keep='first'):
+def duplicated_int64(ndarray[int64_t, ndim=1] values,
+ object keep='first'):
cdef:
int ret = 0, k
int64_t value
diff --git a/pandas/indexes/category.py b/pandas/indexes/category.py
index 4c9ca43f7f25d..3b7c660f5faa1 100644
--- a/pandas/indexes/category.py
+++ b/pandas/indexes/category.py
@@ -281,7 +281,8 @@ def is_unique(self):
@Appender(base._shared_docs['duplicated'] % ibase._index_doc_kwargs)
def duplicated(self, keep='first'):
from pandas.hashtable import duplicated_int64
- return duplicated_int64(self.codes.astype('i8'), keep)
+ codes = self.codes.astype('i8')
+ return duplicated_int64(codes, keep)
def _to_safe_for_reshape(self):
""" convert to object if we are a categorical """
diff --git a/pandas/src/join.pyx b/pandas/src/join.pyx
index 8a9cf01375a68..a81ac0aa35d4e 100644
--- a/pandas/src/join.pyx
+++ b/pandas/src/join.pyx
@@ -125,6 +125,153 @@ def left_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
+def left_outer_asof_join(ndarray[int64_t] left, ndarray[int64_t] right,
+ Py_ssize_t max_groups, sort=True,
+ bint allow_exact_matches=1,
+ left_distance=None,
+ right_distance=None,
+ tolerance=None):
+
+ cdef:
+ Py_ssize_t i, j, k, count = 0
+ Py_ssize_t loc, left_pos, right_pos, position
+ Py_ssize_t offset
+ ndarray[int64_t] left_count, right_count
+ ndarray left_sorter, right_sorter, rev
+ ndarray[int64_t] left_indexer, right_indexer
+ int64_t lc, rc, tol, left_val, right_val, diff, indexer
+ ndarray[int64_t] ld, rd
+ bint has_tol = 0
+
+ # if we are using tolerance, set our objects
+ if left_distance is not None and right_distance is not None and tolerance is not None:
+ has_tol = 1
+ ld = left_distance
+ rd = right_distance
+ tol = tolerance
+
+ # NA group in location 0
+ left_sorter, left_count = groupsort_indexer(left, max_groups)
+ right_sorter, right_count = groupsort_indexer(right, max_groups)
+
+ # First pass, determine size of result set, do not use the NA group
+ for i in range(1, max_groups + 1):
+ if right_count[i] > 0:
+ count += left_count[i] * right_count[i]
+ else:
+ count += left_count[i]
+
+ # group 0 is the NA group
+ left_pos = 0
+ right_pos = 0
+ position = 0
+
+ # exclude the NA group
+ left_pos = left_count[0]
+ right_pos = right_count[0]
+
+ left_indexer = np.empty(count, dtype=np.int64)
+ right_indexer = np.empty(count, dtype=np.int64)
+
+ for i in range(1, max_groups + 1):
+ lc = left_count[i]
+ rc = right_count[i]
+
+ if rc == 0:
+ for j in range(lc):
+ indexer = position + j
+ left_indexer[indexer] = left_pos + j
+
+ # take the most recent value
+ # if we are not the first
+ if right_pos:
+
+ if has_tol:
+
+ left_val = ld[left_pos + j]
+ right_val = rd[right_pos - 1]
+ diff = left_val - right_val
+
+ # do we allow exact matches
+ if allow_exact_matches and diff > tol:
+ right_indexer[indexer] = -1
+ continue
+ elif not allow_exact_matches:
+ if diff >= tol:
+ right_indexer[indexer] = -1
+ continue
+
+ right_indexer[indexer] = right_pos - 1
+ else:
+ right_indexer[indexer] = -1
+ position += lc
+ else:
+ for j in range(lc):
+ offset = position + j * rc
+ for k in range(rc):
+
+ indexer = offset + k
+ left_indexer[indexer] = left_pos + j
+
+ if has_tol:
+
+ left_val = ld[left_pos + j]
+ right_val = rd[right_pos + k]
+ diff = left_val - right_val
+
+ # do we allow exact matches
+ if allow_exact_matches and diff > tol:
+ right_indexer[indexer] = -1
+ continue
+
+ # we don't allow exact matches
+ elif not allow_exact_matches:
+ if diff >= tol or not right_pos:
+ right_indexer[indexer] = -1
+ else:
+ right_indexer[indexer] = right_pos - 1
+ continue
+
+ else:
+
+ # do we allow exact matches
+ if not allow_exact_matches:
+
+ if right_pos:
+ right_indexer[indexer] = right_pos - 1
+ else:
+ right_indexer[indexer] = -1
+ continue
+
+ right_indexer[indexer] = right_pos + k
+ position += lc * rc
+ left_pos += lc
+ right_pos += rc
+
+ left_indexer = _get_result_indexer(left_sorter, left_indexer)
+ right_indexer = _get_result_indexer(right_sorter, right_indexer)
+
+ if not sort: # if not asked to sort, revert to original order
+ if len(left) == len(left_indexer):
+ # no multiple matches for any row on the left
+ # this is a short-cut to avoid groupsort_indexer
+ # otherwise, the `else` path also works in this case
+ if left_sorter.dtype != np.int_:
+ left_sorter = left_sorter.astype(np.int_)
+
+ rev = np.empty(len(left), dtype=np.int_)
+ rev.put(left_sorter, np.arange(len(left)))
+ else:
+ rev, _ = groupsort_indexer(left_indexer, len(left))
+
+ if rev.dtype != np.int_:
+ rev = rev.astype(np.int_)
+ right_indexer = right_indexer.take(rev)
+ left_indexer = left_indexer.take(rev)
+
+ return left_indexer, right_indexer
+
+
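The Cython routine above is dense; as a mental model (not part of this patch), the core matching it performs — for each sorted left key, take the last right key at or before it — can be sketched in plain NumPy with `searchsorted`. The `-1` sentinel mirrors the `right_indexer[indexer] = -1` "no match" case above:

```python
import numpy as np

def asof_indexer(left, right, allow_exact_matches=True):
    # For each sorted left key, return the position of the last right
    # key that is <= it (side='right') or strictly < it (side='left');
    # -1 marks "no match", like the -1 sentinel in the join routine.
    side = 'right' if allow_exact_matches else 'left'
    return np.searchsorted(right, left, side=side) - 1

left = np.array([1, 5, 10])
right = np.array([1, 2, 3, 6, 7])
print(asof_indexer(left, right))         # last right key <= each left key
print(asof_indexer(left, right, False))  # strictly-before matches only
```

This ignores grouping, tolerance, and NA handling, which are the bulk of the Cython code, but captures the asof semantics.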
def full_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
Py_ssize_t max_groups):
cdef:
@@ -246,4 +393,3 @@ def ffill_by_group(ndarray[int64_t] indexer, ndarray[int64_t] group_ids,
last_obs[gid] = val
return result
-
diff --git a/pandas/tests/frame/test_asof.py b/pandas/tests/frame/test_asof.py
new file mode 100644
index 0000000000000..6c15c75cb5427
--- /dev/null
+++ b/pandas/tests/frame/test_asof.py
@@ -0,0 +1,72 @@
+# coding=utf-8
+
+import nose
+
+import numpy as np
+from pandas import DataFrame, date_range
+
+from pandas.util.testing import assert_frame_equal
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class TestFrameAsof(TestData, tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.N = N = 50
+ rng = date_range('1/1/1990', periods=N, freq='53s')
+ self.df = DataFrame({'A': np.arange(N), 'B': np.arange(N)},
+ index=rng)
+
+ def test_basic(self):
+
+ df = self.df.copy()
+ df.ix[15:30, 'A'] = np.nan
+ dates = date_range('1/1/1990', periods=self.N * 3,
+ freq='25s')
+
+ result = df.asof(dates)
+ self.assertTrue(result.notnull().all(1).all())
+ lb = df.index[14]
+ ub = df.index[30]
+
+ dates = list(dates)
+ result = df.asof(dates)
+ self.assertTrue(result.notnull().all(1).all())
+
+ mask = (result.index >= lb) & (result.index < ub)
+ rs = result[mask]
+ self.assertTrue((rs == 14).all(1).all())
+
+ def test_subset(self):
+
+ N = 10
+ rng = date_range('1/1/1990', periods=N, freq='53s')
+ df = DataFrame({'A': np.arange(N), 'B': np.arange(N)},
+ index=rng)
+ df.ix[4:8, 'A'] = np.nan
+ dates = date_range('1/1/1990', periods=N * 3,
+ freq='25s')
+
+ # with a subset of A should be the same
+ result = df.asof(dates, subset='A')
+ expected = df.asof(dates)
+ assert_frame_equal(result, expected)
+
+ # same with A/B
+ result = df.asof(dates, subset=['A', 'B'])
+ expected = df.asof(dates)
+ assert_frame_equal(result, expected)
+
+ # B gives self.df.asof
+ result = df.asof(dates, subset='B')
+ expected = df.resample('25s', closed='right').ffill().reindex(dates)
+ expected.iloc[20:] = 9
+
+ assert_frame_equal(result, expected)
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
new file mode 100644
index 0000000000000..e2092feab9004
--- /dev/null
+++ b/pandas/tests/series/test_asof.py
@@ -0,0 +1,158 @@
+# coding=utf-8
+
+import nose
+
+import numpy as np
+
+from pandas import (offsets, Series, notnull,
+ isnull, date_range, Timestamp)
+
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class TestSeriesAsof(TestData, tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def test_basic(self):
+
+ # array or list or dates
+ N = 50
+ rng = date_range('1/1/1990', periods=N, freq='53s')
+ ts = Series(np.random.randn(N), index=rng)
+ ts[15:30] = np.nan
+ dates = date_range('1/1/1990', periods=N * 3, freq='25s')
+
+ result = ts.asof(dates)
+ self.assertTrue(notnull(result).all())
+ lb = ts.index[14]
+ ub = ts.index[30]
+
+ result = ts.asof(list(dates))
+ self.assertTrue(notnull(result).all())
+ lb = ts.index[14]
+ ub = ts.index[30]
+
+ mask = (result.index >= lb) & (result.index < ub)
+ rs = result[mask]
+ self.assertTrue((rs == ts[lb]).all())
+
+ val = result[result.index[result.index >= ub][0]]
+ self.assertEqual(ts[ub], val)
+
+ def test_scalar(self):
+
+ N = 30
+ rng = date_range('1/1/1990', periods=N, freq='53s')
+ ts = Series(np.arange(N), index=rng)
+ ts[5:10] = np.NaN
+ ts[15:20] = np.NaN
+
+ val1 = ts.asof(ts.index[7])
+ val2 = ts.asof(ts.index[19])
+
+ self.assertEqual(val1, ts[4])
+ self.assertEqual(val2, ts[14])
+
+ # accepts strings
+ val1 = ts.asof(str(ts.index[7]))
+ self.assertEqual(val1, ts[4])
+
+ # in there
+ result = ts.asof(ts.index[3])
+ self.assertEqual(result, ts[3])
+
+ # no as of value
+ d = ts.index[0] - offsets.BDay()
+ self.assertTrue(np.isnan(ts.asof(d)))
+
+ def test_with_nan(self):
+ # basic asof test
+ rng = date_range('1/1/2000', '1/2/2000', freq='4h')
+ s = Series(np.arange(len(rng)), index=rng)
+ r = s.resample('2h').mean()
+
+ result = r.asof(r.index)
+ expected = Series([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6.],
+ index=date_range('1/1/2000', '1/2/2000', freq='2h'))
+ tm.assert_series_equal(result, expected)
+
+ r.iloc[3:5] = np.nan
+ result = r.asof(r.index)
+ expected = Series([0, 0, 1, 1, 1, 1, 3, 3, 4, 4, 5, 5, 6.],
+ index=date_range('1/1/2000', '1/2/2000', freq='2h'))
+ tm.assert_series_equal(result, expected)
+
+ r.iloc[-3:] = np.nan
+ result = r.asof(r.index)
+ expected = Series([0, 0, 1, 1, 1, 1, 3, 3, 4, 4, 4, 4, 4.],
+ index=date_range('1/1/2000', '1/2/2000', freq='2h'))
+ tm.assert_series_equal(result, expected)
+
+ def test_periodindex(self):
+ from pandas import period_range, PeriodIndex
+ # array or list or dates
+ N = 50
+ rng = period_range('1/1/1990', periods=N, freq='H')
+ ts = Series(np.random.randn(N), index=rng)
+ ts[15:30] = np.nan
+ dates = date_range('1/1/1990', periods=N * 3, freq='37min')
+
+ result = ts.asof(dates)
+ self.assertTrue(notnull(result).all())
+ lb = ts.index[14]
+ ub = ts.index[30]
+
+ result = ts.asof(list(dates))
+ self.assertTrue(notnull(result).all())
+ lb = ts.index[14]
+ ub = ts.index[30]
+
+ pix = PeriodIndex(result.index.values, freq='H')
+ mask = (pix >= lb) & (pix < ub)
+ rs = result[mask]
+ self.assertTrue((rs == ts[lb]).all())
+
+ ts[5:10] = np.nan
+ ts[15:20] = np.nan
+
+ val1 = ts.asof(ts.index[7])
+ val2 = ts.asof(ts.index[19])
+
+ self.assertEqual(val1, ts[4])
+ self.assertEqual(val2, ts[14])
+
+ # accepts strings
+ val1 = ts.asof(str(ts.index[7]))
+ self.assertEqual(val1, ts[4])
+
+ # in there
+ self.assertEqual(ts.asof(ts.index[3]), ts[3])
+
+ # no as of value
+ d = ts.index[0].to_timestamp() - offsets.BDay()
+ self.assertTrue(isnull(ts.asof(d)))
+
+ def test_errors(self):
+
+ s = Series([1, 2, 3],
+ index=[Timestamp('20130101'),
+ Timestamp('20130103'),
+ Timestamp('20130102')])
+
+ # non-monotonic
+ self.assertFalse(s.index.is_monotonic)
+ with self.assertRaises(ValueError):
+ s.asof(s.index[0])
+
+ # subset with Series
+ N = 10
+ rng = date_range('1/1/1990', periods=N, freq='53s')
+ s = Series(np.random.randn(N), index=rng)
+ with self.assertRaises(ValueError):
+ s.asof(s.index[0], subset='foo')
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index 13b95ea97eedf..19acf54c7a3cb 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -3,10 +3,9 @@
from datetime import datetime
-from numpy import nan
import numpy as np
-from pandas import Index, Series, notnull, date_range
+from pandas import Index, Series, date_range
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.tdi import TimedeltaIndex
@@ -179,51 +178,6 @@ def test_truncate(self):
before=self.ts.index[-1] + offset,
after=self.ts.index[0] - offset)
- def test_asof(self):
- # array or list or dates
- N = 50
- rng = date_range('1/1/1990', periods=N, freq='53s')
- ts = Series(np.random.randn(N), index=rng)
- ts[15:30] = np.nan
- dates = date_range('1/1/1990', periods=N * 3, freq='25s')
-
- result = ts.asof(dates)
- self.assertTrue(notnull(result).all())
- lb = ts.index[14]
- ub = ts.index[30]
-
- result = ts.asof(list(dates))
- self.assertTrue(notnull(result).all())
- lb = ts.index[14]
- ub = ts.index[30]
-
- mask = (result.index >= lb) & (result.index < ub)
- rs = result[mask]
- self.assertTrue((rs == ts[lb]).all())
-
- val = result[result.index[result.index >= ub][0]]
- self.assertEqual(ts[ub], val)
-
- self.ts[5:10] = np.NaN
- self.ts[15:20] = np.NaN
-
- val1 = self.ts.asof(self.ts.index[7])
- val2 = self.ts.asof(self.ts.index[19])
-
- self.assertEqual(val1, self.ts[4])
- self.assertEqual(val2, self.ts[14])
-
- # accepts strings
- val1 = self.ts.asof(str(self.ts.index[7]))
- self.assertEqual(val1, self.ts[4])
-
- # in there
- self.assertEqual(self.ts.asof(self.ts.index[3]), self.ts[3])
-
- # no as of value
- d = self.ts.index[0] - datetools.bday
- self.assertTrue(np.isnan(self.ts.asof(d)))
-
def test_getitem_setitem_datetimeindex(self):
from pandas import date_range
@@ -424,68 +378,6 @@ def test_getitem_setitem_periodindex(self):
result[4:8] = ts[4:8]
assert_series_equal(result, ts)
- def test_asof_periodindex(self):
- from pandas import period_range, PeriodIndex
- # array or list or dates
- N = 50
- rng = period_range('1/1/1990', periods=N, freq='H')
- ts = Series(np.random.randn(N), index=rng)
- ts[15:30] = np.nan
- dates = date_range('1/1/1990', periods=N * 3, freq='37min')
-
- result = ts.asof(dates)
- self.assertTrue(notnull(result).all())
- lb = ts.index[14]
- ub = ts.index[30]
-
- result = ts.asof(list(dates))
- self.assertTrue(notnull(result).all())
- lb = ts.index[14]
- ub = ts.index[30]
-
- pix = PeriodIndex(result.index.values, freq='H')
- mask = (pix >= lb) & (pix < ub)
- rs = result[mask]
- self.assertTrue((rs == ts[lb]).all())
-
- ts[5:10] = np.NaN
- ts[15:20] = np.NaN
-
- val1 = ts.asof(ts.index[7])
- val2 = ts.asof(ts.index[19])
-
- self.assertEqual(val1, ts[4])
- self.assertEqual(val2, ts[14])
-
- # accepts strings
- val1 = ts.asof(str(ts.index[7]))
- self.assertEqual(val1, ts[4])
-
- # in there
- self.assertEqual(ts.asof(ts.index[3]), ts[3])
-
- # no as of value
- d = ts.index[0].to_timestamp() - datetools.bday
- self.assertTrue(np.isnan(ts.asof(d)))
-
- def test_asof_more(self):
- from pandas import date_range
-
- s = Series([nan, nan, 1, 2, nan, nan, 3, 4, 5],
- index=date_range('1/1/2000', periods=9))
-
- dates = s.index[[4, 5, 6, 2, 1]]
-
- result = s.asof(dates)
- expected = Series([2, 2, 3, 1, np.nan], index=dates)
-
- assert_series_equal(result, expected)
-
- s = Series([1.5, 2.5, 1, 2, nan, nan, 3, 4, 5],
- index=date_range('1/1/2000', periods=9))
- result = s.asof(s.index[0])
- self.assertEqual(result, s[0])
-
def test_asfreq(self):
ts = Series([0., 1., 2.], index=[datetime(2009, 10, 30), datetime(
2009, 11, 30), datetime(2009, 12, 31)])
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 182c0637ae29c..f963a271a767e 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -2,23 +2,30 @@
SQL-style merge routines
"""
+import copy
import warnings
import numpy as np
from pandas.compat import range, lrange, lzip, zip, map, filter
import pandas.compat as compat
-from pandas.core.categorical import Categorical
-from pandas.core.frame import DataFrame, _merge_doc
+from pandas import (Categorical, DataFrame, Series,
+ Index, MultiIndex, Timedelta)
+from pandas.core.frame import _merge_doc
from pandas.core.generic import NDFrame
-from pandas.core.series import Series
-from pandas.core.index import (Index, MultiIndex, _get_combined_index,
+from pandas.core.index import (_get_combined_index,
_ensure_index, _get_consensus_names,
_all_indexes_same)
from pandas.core.internals import (items_overlap_with_suffix,
concatenate_block_managers)
from pandas.util.decorators import Appender, Substitution
-from pandas.core.common import ABCSeries
+from pandas.core.common import (ABCSeries, is_dtype_equal,
+ is_datetime64_dtype,
+ is_int64_dtype,
+ is_integer,
+ is_bool,
+ is_list_like,
+ needs_i8_conversion)
import pandas.core.algorithms as algos
import pandas.core.common as com
@@ -47,9 +54,100 @@ class MergeError(ValueError):
pass
-def ordered_merge(left, right, on=None, left_by=None, right_by=None,
+def _groupby_and_merge(by, on, left, right, _merge_pieces,
+ check_duplicates=True):
+ """
+ groupby & merge; we are always performing a left-by type operation
+
+ Parameters
+ ----------
+ by: field to group
+ on: duplicates field
+ left: left frame
+ right: right frame
+ _merge_pieces: function for merging
+ check_duplicates: boolean, default True
+ should we check & clean duplicates
+ """
+
+ pieces = []
+ if not isinstance(by, (list, tuple)):
+ by = [by]
+
+ lby = left.groupby(by, sort=False)
+
+ # if we can groupby the rhs
+ # then we can get vastly better perf
+ try:
+
+ # we will check & remove duplicates if indicated
+ if check_duplicates:
+ if on is None:
+ on = []
+ elif not isinstance(on, (list, tuple)):
+ on = [on]
+
+ if right.duplicated(by + on).any():
+ right = right.drop_duplicates(by + on, keep='last')
+ rby = right.groupby(by, sort=False)
+ except KeyError:
+ rby = None
+
+ for key, lhs in lby:
+
+ if rby is None:
+ rhs = right
+ else:
+ try:
+ rhs = right.take(rby.indices[key])
+ except KeyError:
+ # key doesn't exist in right

+ lcols = lhs.columns.tolist()
+ cols = lcols + [r for r in right.columns
+ if r not in set(lcols)]
+ merged = lhs.reindex(columns=cols)
+ merged.index = range(len(merged))
+ pieces.append(merged)
+ continue
+
+ merged = _merge_pieces(lhs, rhs)
+
+ # make sure join keys are in the merged
+ # TODO, should _merge_pieces do this?
+ for k in by:
+ try:
+ if k in merged:
+ merged[k] = key
+ except:
+ pass
+
+ pieces.append(merged)
+
+ # preserve the original order
+ # if we have a missing piece this can be reset
+ result = concat(pieces, ignore_index=True)
+ result = result.reindex(columns=pieces[0].columns, copy=False)
+ return result, lby
+
+
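A rough, pure-pandas model of the "left-by" pattern `_groupby_and_merge` implements (the real helper also handles missing keys, duplicate cleanup, and column reindexing; the toy frames here are invented for illustration):

```python
import pandas as pd

# Invented toy frames: merge each left group against the matching
# right group, then recombine -- the group-wise "left-by" pattern.
left = pd.DataFrame({'g': ['x', 'x', 'y'], 'k': [1, 2, 1],
                     'lv': [10, 20, 30]})
right = pd.DataFrame({'g': ['x', 'y'], 'k': [1, 1], 'rv': [100, 200]})

pieces = []
for key, lhs in left.groupby('g', sort=False):
    rhs = right[right['g'] == key]
    pieces.append(lhs.merge(rhs, on=['g', 'k'], how='left'))

result = pd.concat(pieces, ignore_index=True)
print(result)
```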
+def ordered_merge(left, right, on=None,
left_on=None, right_on=None,
+ left_by=None, right_by=None,
fill_method=None, suffixes=('_x', '_y')):
+
+ warnings.warn("ordered_merge is deprecated and replaced by merge_ordered",
+ FutureWarning, stacklevel=2)
+ return merge_ordered(left, right, on=on,
+ left_on=left_on, right_on=right_on,
+ left_by=left_by, right_by=right_by,
+ fill_method=fill_method, suffixes=suffixes)
+
+
+def merge_ordered(left, right, on=None,
+ left_on=None, right_on=None,
+ left_by=None, right_by=None,
+ fill_method=None, suffixes=('_x', '_y'),
+ how='outer'):
"""Perform merge with optional filling/interpolation designed for ordered
data like time series data. Optionally perform group-wise merge (see
examples)
@@ -58,8 +156,6 @@ def ordered_merge(left, right, on=None, left_by=None, right_by=None,
----------
left : DataFrame
right : DataFrame
- fill_method : {'ffill', None}, default None
- Interpolation method for data
on : label or list
Field names to join on. Must be found in both DataFrames.
left_on : label or list, or array-like
@@ -75,9 +171,18 @@ def ordered_merge(left, right, on=None, left_by=None, right_by=None,
right_by : column name or list of column names
Group right DataFrame by group columns and merge piece by piece with
left DataFrame
+ fill_method : {'ffill', None}, default None
+ Interpolation method for data
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively
+ how : {'left', 'right', 'outer', 'inner'}, default 'outer'
+ * left: use only keys from left frame (SQL: left outer join)
+ * right: use only keys from right frame (SQL: right outer join)
+ * outer: use union of keys from both frames (SQL: full outer join)
+ * inner: use intersection of keys from both frames (SQL: inner join)
+
+ .. versionadded:: 0.18.2
Examples
--------
@@ -110,46 +215,243 @@ def ordered_merge(left, right, on=None, left_by=None, right_by=None,
merged : DataFrame
The output type will the be same as 'left', if it is a subclass
of DataFrame.
+
+ See also
+ --------
+ merge
+ merge_asof
+
"""
def _merger(x, y):
+ # perform the ordered merge operation
op = _OrderedMerge(x, y, on=on, left_on=left_on, right_on=right_on,
- # left_index=left_index, right_index=right_index,
- suffixes=suffixes, fill_method=fill_method)
+ suffixes=suffixes, fill_method=fill_method,
+ how=how)
return op.get_result()
if left_by is not None and right_by is not None:
raise ValueError('Can only group either left or right frames')
elif left_by is not None:
- if not isinstance(left_by, (list, tuple)):
- left_by = [left_by]
- pieces = []
- for key, xpiece in left.groupby(left_by):
- merged = _merger(xpiece, right)
- for k in left_by:
- # May have passed ndarray
- try:
- if k in merged:
- merged[k] = key
- except:
- pass
- pieces.append(merged)
- return concat(pieces, ignore_index=True)
+ result, _ = _groupby_and_merge(left_by, on, left, right,
+ lambda x, y: _merger(x, y),
+ check_duplicates=False)
elif right_by is not None:
- if not isinstance(right_by, (list, tuple)):
- right_by = [right_by]
- pieces = []
- for key, ypiece in right.groupby(right_by):
- merged = _merger(left, ypiece)
- for k in right_by:
- try:
- if k in merged:
- merged[k] = key
- except:
- pass
- pieces.append(merged)
- return concat(pieces, ignore_index=True)
+ result, _ = _groupby_and_merge(right_by, on, right, left,
+ lambda x, y: _merger(y, x),
+ check_duplicates=False)
else:
- return _merger(left, right)
+ result = _merger(left, right)
+ return result
+
+
+def merge_asof(left, right, on=None,
+ left_on=None, right_on=None,
+ by=None,
+ suffixes=('_x', '_y'),
+ tolerance=None,
+ allow_exact_matches=True,
+ check_duplicates=True):
+ """Perform an asof merge. This is similar to a left-join except that we
+ match on nearest key rather than equal keys.
+
+ For each row in the left DataFrame, we select the last row in the right
+ DataFrame whose 'on' key is less than or equal to the left's key. Both
+ DataFrames must be sorted by the key.
+
+ Optionally perform group-wise merge. This searches for the nearest match
+ on the 'on' key within the same group according to 'by'.
+
+ .. versionadded:: 0.18.2
+
+ Parameters
+ ----------
+ left : DataFrame
+ right : DataFrame
+ on : label or list
+ Field names to join on. Must be found in both DataFrames.
+ The data MUST be ordered. Furthermore this must be a numeric column,
+ typically a datetimelike or integer. On or left_on/right_on
+ must be given.
+ left_on : label or list, or array-like
+ Field names to join on in left DataFrame. Can be a vector or list of
+ vectors of the length of the DataFrame to use a particular vector as
+ the join key instead of columns
+ right_on : label or list, or array-like
+ Field names to join on in right DataFrame or vector/list of vectors per
+ left_on docs
+ by : column name or list of column names
+ Group both the left and right DataFrames by the group columns; perform
+ the merge operation on these pieces and recombine.
+ suffixes : 2-length sequence (tuple, list, ...)
+ Suffix to apply to overlapping column names in the left and right
+ side, respectively
+ tolerance : integer or Timedelta, optional, default None
+ select asof tolerance within this range; must be compatible
+ with the merge index.
+ allow_exact_matches : boolean, default True
+
+ - If True, allow matching the same 'on' value
+ (i.e. less-than-or-equal-to)
+ - If False, don't match the same 'on' value
+ (i.e., strictly less-than)
+
+ check_duplicates : boolean, default True
+
+ - If True, check and remove duplicates for the right
+ DataFrame, on the [by, on] combination, keeping the last value.
+ - If False, no check for duplicates. If you *know* that
+ you don't have duplicates, then turning off the check for duplicates
+ can be more performant.
+
+ Returns
+ -------
+ merged : DataFrame
+
+ Examples
+ --------
+ >>> left
+ a left_val
+ 0 1 a
+ 1 5 b
+ 2 10 c
+
+ >>> right
+ a right_val
+ 0 1 1
+ 1 2 2
+ 2 3 3
+ 3 6 6
+ 4 7 7
+
+ >>> pd.merge_asof(left, right, on='a')
+ a left_val right_val
+ 0 1 a 1
+ 1 5 b 3
+ 2 10 c 7
+
+ >>> pd.merge_asof(left, right, on='a', allow_exact_matches=False)
+ a left_val right_val
+ 0 1 a NaN
+ 1 5 b 3.0
+ 2 10 c 7.0
+
+ For this example, we can achieve a similar result through
+ pd.merge_ordered, though it's not nearly as performant.
+
+
+ >>> (pd.merge_ordered(left, right, on='a')
+ ... .ffill()
+ ... .drop_duplicates(['left_val'])
+ ... )
+ a left_val right_val
+ 0 1 a 1.0
+ 3 5 b 3.0
+ 6 10 c 7.0
+
+ Here is a real-world time-series example
+
+ >>> quotes
+ time ticker bid ask
+ 0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
+ 1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
+ 2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
+ 3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
+ 4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
+ 5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
+ 6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
+ 7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
+
+ >>> trades
+ time ticker price quantity
+ 0 2016-05-25 13:30:00.023 MSFT 51.95 75
+ 1 2016-05-25 13:30:00.038 MSFT 51.95 155
+ 2 2016-05-25 13:30:00.048 GOOG 720.77 100
+ 3 2016-05-25 13:30:00.048 GOOG 720.92 100
+ 4 2016-05-25 13:30:00.048 AAPL 98.00 100
+
+ # by default we are taking the asof of the quotes
+ >>> pd.merge_asof(trades, quotes,
+ ... on='time',
+ ... by='ticker')
+ time ticker price quantity bid ask
+ 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
+ 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
+ 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
+ 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
+ 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
+
+ # we only asof within 2ms between the quote time and the trade time
+ >>> pd.merge_asof(trades, quotes,
+ ... on='time',
+ ... by='ticker',
+ ... tolerance=pd.Timedelta('2ms'))
+ time ticker price quantity bid ask
+ 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
+ 1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
+ 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
+ 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
+ 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
+
+ # we only asof within 10ms between the quote time and the trade time
+ # and we exclude exact matches on time. However *prior* data will
+ # propagate forward
+ >>> pd.merge_asof(trades, quotes,
+ ... on='time',
+ ... by='ticker',
+ ... tolerance=pd.Timedelta('10ms'),
+ ... allow_exact_matches=False)
+ time ticker price quantity bid ask
+ 0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
+ 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
+ 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
+ 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
+ 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
+
+ See also
+ --------
+ merge
+ merge_ordered
+
+ """
+ def _merger(x, y):
+ # perform the ordered merge operation
+ op = _AsOfMerge(x, y,
+ on=on, left_on=left_on, right_on=right_on,
+ by=by, suffixes=suffixes,
+ how='asof', tolerance=tolerance,
+ allow_exact_matches=allow_exact_matches)
+ return op.get_result()
+
+ if by is not None:
+ result, groupby = _groupby_and_merge(by, on, left, right,
+ lambda x, y: _merger(x, y),
+ check_duplicates=check_duplicates)
+
+ # we want to preserve the original order
+ # we had grouped, so need to reverse this
+ # if we DO have duplicates, then
+ # we cannot guarantee order
+
+ sorter = np.concatenate([groupby.indices[g] for g, _ in groupby])
+ if len(result) != len(sorter):
+ if check_duplicates:
+ raise AssertionError("invalid reverse grouping")
+ return result
+
+ rev = np.empty(len(sorter), dtype=np.int_)
+ rev.put(sorter, np.arange(len(sorter)))
+ return result.take(rev).reset_index(drop=True)
+
+ if check_duplicates:
+ if on is None:
+ on = []
+ elif not isinstance(on, (list, tuple)):
+ on = [on]
+
+ if right.duplicated(on).any():
+ right = right.drop_duplicates(on, keep='last')
+
+ return _merger(left, right)
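As a quick sanity check of the new top-level API, the frames from the docstring above can be run through `merge_asof` directly (assuming a build that includes this patch; `merge_asof` later shipped in pandas proper):

```python
import pandas as pd

left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
                      'right_val': [1, 2, 3, 6, 7]})

# each left row is matched to the last right row with right.a <= left.a
result = pd.merge_asof(left, right, on='a')
print(result)
```

This reproduces the first docstring example: `right_val` of 1, 3, 7 for keys 1, 5, 10.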
# TODO: transformations??
@@ -159,6 +461,7 @@ class _MergeOperation(object):
Perform a database (SQL) merge operation between two DataFrame objects
using either columns as keys or their row indexes
"""
+ _merge_type = 'merge'
def __init__(self, left, right, how='inner', on=None,
left_on=None, right_on=None, axis=1,
@@ -206,6 +509,8 @@ def __init__(self, left, right, how='inner', on=None,
msg = msg.format(left.columns.nlevels, right.columns.nlevels)
warnings.warn(msg, UserWarning)
+ self._validate_specification()
+
# note this function has side effects
(self.left_join_keys,
self.right_join_keys,
@@ -233,7 +538,7 @@ def get_result(self):
concat_axis=0, copy=self.copy)
typ = self.left._constructor
- result = typ(result_data).__finalize__(self, method='merge')
+ result = typ(result_data).__finalize__(self, method=self._merge_type)
if self.indicator:
result = self._indicator_post_merge(result)
@@ -304,8 +609,8 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
if left_has_missing:
take_right = self.right_join_keys[i]
- if not com.is_dtype_equal(result[name].dtype,
- self.left[name].dtype):
+ if not is_dtype_equal(result[name].dtype,
+ self.left[name].dtype):
take_left = self.left[name]._values
elif name in self.right:
@@ -316,8 +621,8 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
if right_has_missing:
take_left = self.left_join_keys[i]
- if not com.is_dtype_equal(result[name].dtype,
- self.right[name].dtype):
+ if not is_dtype_equal(result[name].dtype,
+ self.right[name].dtype):
take_right = self.right[name]._values
elif left_indexer is not None \
@@ -355,6 +660,13 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
else:
result.insert(i, name or 'key_%d' % i, key_col)
+ def _get_join_indexers(self):
+ """ return the join indexers """
+ return _get_join_indexers(self.left_join_keys,
+ self.right_join_keys,
+ sort=self.sort,
+ how=self.how)
+
def _get_join_info(self):
left_ax = self.left._data.axes[self.axis]
right_ax = self.right._data.axes[self.axis]
@@ -373,9 +685,8 @@ def _get_join_info(self):
sort=self.sort)
else:
(left_indexer,
- right_indexer) = _get_join_indexers(self.left_join_keys,
- self.right_join_keys,
- sort=self.sort, how=self.how)
+ right_indexer) = self._get_join_indexers()
+
if self.right_index:
if len(self.left) > 0:
join_index = self.left.index.take(left_indexer)
@@ -429,8 +740,6 @@ def _get_merge_keys(self):
-------
left_keys, right_keys
"""
- self._validate_specification()
-
left_keys = []
right_keys = []
join_names = []
@@ -549,7 +858,8 @@ def _validate_specification(self):
raise ValueError("len(right_on) must equal len(left_on)")
-def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
+def _get_join_indexers(left_keys, right_keys, sort=False, how='inner',
+ **kwargs):
"""
Parameters
@@ -579,26 +889,27 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
lkey, rkey, count = fkeys(lkey, rkey)
# preserve left frame order if how == 'left' and sort == False
- kwargs = {'sort': sort} if how == 'left' else {}
+ kwargs = copy.copy(kwargs)
+ if how == 'left':
+ kwargs['sort'] = sort
join_func = _join_functions[how]
+
return join_func(lkey, rkey, count, **kwargs)
class _OrderedMerge(_MergeOperation):
+ _merge_type = 'ordered_merge'
- def __init__(self, left, right, on=None, by=None, left_on=None,
- right_on=None, axis=1, left_index=False, right_index=False,
+ def __init__(self, left, right, on=None, left_on=None,
+ right_on=None, axis=1,
suffixes=('_x', '_y'), copy=True,
- fill_method=None):
+ fill_method=None, how='outer'):
self.fill_method = fill_method
-
_MergeOperation.__init__(self, left, right, on=on, left_on=left_on,
right_on=right_on, axis=axis,
- left_index=left_index,
- right_index=right_index,
- how='outer', suffixes=suffixes,
- sort=True # sorts when factorizing
+ how=how, suffixes=suffixes,
+ sort=True # factorize sorts
)
def get_result(self):
@@ -629,13 +940,133 @@ def get_result(self):
concat_axis=0, copy=self.copy)
typ = self.left._constructor
- result = typ(result_data).__finalize__(self, method='ordered_merge')
+ result = typ(result_data).__finalize__(self, method=self._merge_type)
self._maybe_add_join_keys(result, left_indexer, right_indexer)
return result
+class _AsOfMerge(_OrderedMerge):
+ _merge_type = 'asof_merge'
+
+ def __init__(self, left, right, on=None, by=None, left_on=None,
+ right_on=None, axis=1,
+ suffixes=('_x', '_y'), copy=True,
+ fill_method=None,
+ how='asof', tolerance=None,
+ allow_exact_matches=True):
+
+ self.by = by
+ self.tolerance = tolerance
+ self.allow_exact_matches = allow_exact_matches
+
+ _OrderedMerge.__init__(self, left, right, on=on, left_on=left_on,
+ right_on=right_on, axis=axis,
+ how=how, suffixes=suffixes,
+ fill_method=fill_method)
+
+ def _validate_specification(self):
+ super(_AsOfMerge, self)._validate_specification()
+
+ # we only allow a single 'on' key on each side
+ if len(self.left_on) != 1:
+ raise MergeError("can only asof on a key for left")
+
+ if len(self.right_on) != 1:
+ raise MergeError("can only asof on a key for right")
+
+ # add by to our key-list so we can have it in the
+ # output as a key
+ if self.by is not None:
+ if not is_list_like(self.by):
+ self.by = [self.by]
+
+ self.left_on = self.by + list(self.left_on)
+ self.right_on = self.by + list(self.right_on)
+
+ @property
+ def _asof_key(self):
+ """ This is our asof key, the 'on' """
+ return self.left_on[-1]
+
+ def _get_merge_keys(self):
+
+ # note this function has side effects
+ (left_join_keys,
+ right_join_keys,
+ join_names) = super(_AsOfMerge, self)._get_merge_keys()
+
+ # validate index types are the same
+ for lk, rk in zip(left_join_keys, right_join_keys):
+ if not is_dtype_equal(lk.dtype, rk.dtype):
+ raise MergeError("incompatible merge keys, "
+ "must be the same type")
+
+ # validate tolerance; must be a Timedelta if we have a DTI
+ if self.tolerance is not None:
+
+ lt = left_join_keys[self.left_on.index(self._asof_key)]
+            msg = "incompatible tolerance, must be compatible " \
+                  "with type {0}".format(type(lt))
+
+ if is_datetime64_dtype(lt):
+ if not isinstance(self.tolerance, Timedelta):
+ raise MergeError(msg)
+ if self.tolerance < Timedelta(0):
+ raise MergeError("tolerance must be positive")
+
+ elif is_int64_dtype(lt):
+ if not is_integer(self.tolerance):
+ raise MergeError(msg)
+ if self.tolerance < 0:
+ raise MergeError("tolerance must be positive")
+
+ else:
+ raise MergeError(msg)
+
+ # validate allow_exact_matches
+ if not is_bool(self.allow_exact_matches):
+ raise MergeError("allow_exact_matches must be boolean, "
+ "passed {0}".format(self.allow_exact_matches))
+
+ return left_join_keys, right_join_keys, join_names
+
+ def _get_join_indexers(self):
+ """ return the join indexers """
+
+        # we require sortedness in the join keys
+ msg = " keys must be sorted"
+ for lk in self.left_join_keys:
+ if not Index(lk).is_monotonic:
+ raise ValueError('left' + msg)
+ for rk in self.right_join_keys:
+ if not Index(rk).is_monotonic:
+ raise ValueError('right' + msg)
+
+ kwargs = {}
+
+ # tolerance
+ t = self.tolerance
+ if t is not None:
+ lt = self.left_join_keys[self.left_on.index(self._asof_key)]
+ rt = self.right_join_keys[self.right_on.index(self._asof_key)]
+ if needs_i8_conversion(lt):
+ lt = lt.view('i8')
+ t = t.value
+ rt = rt.view('i8')
+ kwargs['left_distance'] = lt
+ kwargs['right_distance'] = rt
+ kwargs['tolerance'] = t
+
+ return _get_join_indexers(self.left_join_keys,
+ self.right_join_keys,
+ sort=self.sort,
+ how=self.how,
+ allow_exact_matches=self.allow_exact_matches,
+ **kwargs)
+
+
def _get_multiindex_indexer(join_keys, index, sort):
from functools import partial
@@ -717,6 +1148,7 @@ def _right_outer_join(x, y, max_groups):
'left': _algos.left_outer_join,
'right': _right_outer_join,
'outer': _algos.full_outer_join,
+ 'asof': _algos.left_outer_asof_join,
}
@@ -724,6 +1156,7 @@ def _factorize_keys(lk, rk, sort=True):
if com.is_datetime64tz_dtype(lk) and com.is_datetime64tz_dtype(rk):
lk = lk.values
rk = rk.values
+
if com.is_int_or_datetime_dtype(lk) and com.is_int_or_datetime_dtype(rk):
klass = _hash.Int64Factorizer
lk = com._ensure_int64(com._values_from_object(lk))
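The `_AsOfMerge` machinery added above backs the new public `pd.merge_asof` entry point. A minimal sketch of the backward-search semantics it implements, taken from the doc-string example exercised in `test_examples1` below (each left key is matched to the last right key less than or equal to it; assumes a pandas build that includes this PR):

```python
import pandas as pd

left = pd.DataFrame({'a': [1, 5, 10],
                     'left_val': ['a', 'b', 'c']})
right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
                      'right_val': [1, 2, 3, 6, 7]})

# Each left 'a' is matched to the last right 'a' <= it:
# 1 -> 1, 5 -> 3, 10 -> 7
result = pd.merge_asof(left, right, on='a')
```

Note that unlike an ordinary left merge, every left row gets at most one right match, and both frames must already be sorted on the `on` key.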
diff --git a/pandas/tools/tests/data/allow_exact_matches.csv b/pandas/tools/tests/data/allow_exact_matches.csv
new file mode 100644
index 0000000000000..0446fb744c540
--- /dev/null
+++ b/pandas/tools/tests/data/allow_exact_matches.csv
@@ -0,0 +1,28 @@
+time,ticker,price,quantity,marketCenter,bid,ask
+20160525 13:30:00.023,MSFT,51.95,75,NASDAQ,,
+20160525 13:30:00.038,MSFT,51.95,155,NASDAQ,51.95,51.95
+20160525 13:30:00.048,GOOG,720.77,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.92,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,200,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,300,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,600,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,44,NASDAQ,720.5,720.93
+20160525 13:30:00.074,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.66,6,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,30,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,75,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,20,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,35,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,10,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,,
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,,
+20160525 13:30:00.076,AAPL,98.56,1000,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,300,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,400,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,600,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,783,NASDAQ,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.95,51.95
diff --git a/pandas/tools/tests/data/allow_exact_matches_and_tolerance.csv b/pandas/tools/tests/data/allow_exact_matches_and_tolerance.csv
new file mode 100644
index 0000000000000..0446fb744c540
--- /dev/null
+++ b/pandas/tools/tests/data/allow_exact_matches_and_tolerance.csv
@@ -0,0 +1,28 @@
+time,ticker,price,quantity,marketCenter,bid,ask
+20160525 13:30:00.023,MSFT,51.95,75,NASDAQ,,
+20160525 13:30:00.038,MSFT,51.95,155,NASDAQ,51.95,51.95
+20160525 13:30:00.048,GOOG,720.77,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.92,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,200,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,300,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,600,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,44,NASDAQ,720.5,720.93
+20160525 13:30:00.074,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.66,6,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,30,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,75,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,20,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,35,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.65,10,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,,
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,,
+20160525 13:30:00.076,AAPL,98.56,1000,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,300,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,400,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,600,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,783,NASDAQ,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.95,51.95
diff --git a/pandas/tools/tests/data/asof.csv b/pandas/tools/tests/data/asof.csv
new file mode 100644
index 0000000000000..d7d061bc46ccc
--- /dev/null
+++ b/pandas/tools/tests/data/asof.csv
@@ -0,0 +1,28 @@
+time,ticker,price,quantity,marketCenter,bid,ask
+20160525 13:30:00.023,MSFT,51.95,75,NASDAQ,51.95,51.95
+20160525 13:30:00.038,MSFT,51.95,155,NASDAQ,51.95,51.95
+20160525 13:30:00.048,GOOG,720.77,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.92,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,200,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,300,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,600,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,44,NASDAQ,720.5,720.93
+20160525 13:30:00.074,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.67,478343,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.66,6,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,30,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,75,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,20,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,35,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,10,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,1000,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,300,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,400,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,600,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,783,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
diff --git a/pandas/tools/tests/data/asof2.csv b/pandas/tools/tests/data/asof2.csv
new file mode 100644
index 0000000000000..2c9c0392dd617
--- /dev/null
+++ b/pandas/tools/tests/data/asof2.csv
@@ -0,0 +1,78 @@
+time,ticker,price,quantity,marketCenter,bid,ask
+20160525 13:30:00.023,MSFT,51.95,75,NASDAQ,51.95,51.95
+20160525 13:30:00.038,MSFT,51.95,155,NASDAQ,51.95,51.95
+20160525 13:30:00.048,GOOG,720.77,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.92,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,200,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,300,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,600,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,44,NASDAQ,720.5,720.93
+20160525 13:30:00.074,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.67,478343,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.66,6,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,30,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,75,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,20,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,35,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,10,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,1000,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,300,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,400,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,600,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,783,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
+20160525 13:30:00.084,AAPL,98.64,40,NASDAQ,98.55,98.56
+20160525 13:30:00.084,AAPL,98.55,149,EDGX,98.55,98.56
+20160525 13:30:00.086,AAPL,98.56,500,ARCA,98.55,98.63
+20160525 13:30:00.104,AAPL,98.63,647,EDGX,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,300,EDGX,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,50,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,50,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,70,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,70,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,1,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,62,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,10,NASDAQ,98.62,98.63
+20160525 13:30:00.104,AAPL,98.63,100,ARCA,98.62,98.63
+20160525 13:30:00.105,AAPL,98.63,100,ARCA,98.62,98.63
+20160525 13:30:00.105,AAPL,98.63,700,ARCA,98.62,98.63
+20160525 13:30:00.106,AAPL,98.63,61,EDGX,98.62,98.63
+20160525 13:30:00.107,AAPL,98.63,100,ARCA,98.62,98.63
+20160525 13:30:00.107,AAPL,98.63,53,ARCA,98.62,98.63
+20160525 13:30:00.108,AAPL,98.63,100,ARCA,98.62,98.63
+20160525 13:30:00.108,AAPL,98.63,839,ARCA,98.62,98.63
+20160525 13:30:00.115,AAPL,98.63,5,EDGX,98.62,98.63
+20160525 13:30:00.118,AAPL,98.63,295,EDGX,98.62,98.63
+20160525 13:30:00.118,AAPL,98.63,5,EDGX,98.62,98.63
+20160525 13:30:00.128,AAPL,98.63,100,NASDAQ,98.62,98.63
+20160525 13:30:00.128,AAPL,98.63,100,NASDAQ,98.62,98.63
+20160525 13:30:00.128,MSFT,51.92,100,ARCA,51.92,51.95
+20160525 13:30:00.129,AAPL,98.62,100,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,10,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,59,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,31,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,69,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,12,NASDAQ,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,12,EDGX,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,100,ARCA,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,100,ARCA,98.61,98.63
+20160525 13:30:00.130,MSFT,51.95,317,ARCA,51.93,51.95
+20160525 13:30:00.130,MSFT,51.95,283,ARCA,51.93,51.95
+20160525 13:30:00.135,MSFT,51.93,100,EDGX,51.92,51.95
+20160525 13:30:00.135,AAPL,98.62,100,ARCA,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,12,NASDAQ,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,88,NASDAQ,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,162,NASDAQ,98.61,98.62
+20160525 13:30:00.144,AAPL,98.61,100,BATS,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,61,ARCA,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,25,ARCA,98.61,98.62
+20160525 13:30:00.144,AAPL,98.62,14,ARCA,98.61,98.62
+20160525 13:30:00.145,AAPL,98.62,12,ARCA,98.6,98.63
+20160525 13:30:00.145,AAPL,98.62,100,ARCA,98.6,98.63
+20160525 13:30:00.145,AAPL,98.63,100,NASDAQ,98.6,98.63
+20160525 13:30:00.145,AAPL,98.63,100,NASDAQ,98.6,98.63
diff --git a/pandas/tools/tests/cut_data.csv b/pandas/tools/tests/data/cut_data.csv
similarity index 100%
rename from pandas/tools/tests/cut_data.csv
rename to pandas/tools/tests/data/cut_data.csv
diff --git a/pandas/tools/tests/data/quotes.csv b/pandas/tools/tests/data/quotes.csv
new file mode 100644
index 0000000000000..3f31d2cfffe1b
--- /dev/null
+++ b/pandas/tools/tests/data/quotes.csv
@@ -0,0 +1,17 @@
+time,ticker,bid,ask
+20160525 13:30:00.023,GOOG,720.50,720.93
+20160525 13:30:00.023,MSFT,51.95,51.95
+20160525 13:30:00.041,MSFT,51.95,51.95
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.072,GOOG,720.50,720.88
+20160525 13:30:00.075,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.92,51.95
diff --git a/pandas/tools/tests/data/quotes2.csv b/pandas/tools/tests/data/quotes2.csv
new file mode 100644
index 0000000000000..7ade1e7faf1ae
--- /dev/null
+++ b/pandas/tools/tests/data/quotes2.csv
@@ -0,0 +1,57 @@
+time,ticker,bid,ask
+20160525 13:30:00.023,GOOG,720.50,720.93
+20160525 13:30:00.023,MSFT,51.95,51.95
+20160525 13:30:00.041,MSFT,51.95,51.95
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.048,GOOG,720.50,720.93
+20160525 13:30:00.072,GOOG,720.50,720.88
+20160525 13:30:00.075,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.076,AAPL,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.95,51.95
+20160525 13:30:00.078,MSFT,51.92,51.95
+20160525 13:30:00.079,MSFT,51.92,51.95
+20160525 13:30:00.080,AAPL,98.55,98.56
+20160525 13:30:00.084,AAPL,98.55,98.56
+20160525 13:30:00.086,AAPL,98.55,98.63
+20160525 13:30:00.088,AAPL,98.65,98.63
+20160525 13:30:00.089,AAPL,98.63,98.63
+20160525 13:30:00.104,AAPL,98.63,98.63
+20160525 13:30:00.104,AAPL,98.63,98.63
+20160525 13:30:00.104,AAPL,98.63,98.63
+20160525 13:30:00.104,AAPL,98.63,98.63
+20160525 13:30:00.104,AAPL,98.62,98.63
+20160525 13:30:00.105,AAPL,98.62,98.63
+20160525 13:30:00.107,AAPL,98.62,98.63
+20160525 13:30:00.115,AAPL,98.62,98.63
+20160525 13:30:00.115,AAPL,98.62,98.63
+20160525 13:30:00.118,AAPL,98.62,98.63
+20160525 13:30:00.128,AAPL,98.62,98.63
+20160525 13:30:00.128,AAPL,98.62,98.63
+20160525 13:30:00.129,AAPL,98.62,98.63
+20160525 13:30:00.129,AAPL,98.61,98.63
+20160525 13:30:00.129,AAPL,98.62,98.63
+20160525 13:30:00.129,AAPL,98.62,98.63
+20160525 13:30:00.129,AAPL,98.61,98.63
+20160525 13:30:00.130,MSFT,51.93,51.95
+20160525 13:30:00.130,MSFT,51.93,51.95
+20160525 13:30:00.130,AAPL,98.61,98.63
+20160525 13:30:00.131,AAPL,98.61,98.62
+20160525 13:30:00.131,AAPL,98.61,98.62
+20160525 13:30:00.135,MSFT,51.92,51.95
+20160525 13:30:00.135,AAPL,98.61,98.62
+20160525 13:30:00.136,AAPL,98.61,98.62
+20160525 13:30:00.136,AAPL,98.61,98.62
+20160525 13:30:00.144,AAPL,98.61,98.62
+20160525 13:30:00.144,AAPL,98.61,98.62
+20160525 13:30:00.145,AAPL,98.61,98.62
+20160525 13:30:00.145,AAPL,98.61,98.63
+20160525 13:30:00.145,AAPL,98.61,98.63
+20160525 13:30:00.145,AAPL,98.60,98.63
+20160525 13:30:00.145,AAPL,98.61,98.63
+20160525 13:30:00.145,AAPL,98.60,98.63
diff --git a/pandas/tools/tests/data/tolerance.csv b/pandas/tools/tests/data/tolerance.csv
new file mode 100644
index 0000000000000..d7d061bc46ccc
--- /dev/null
+++ b/pandas/tools/tests/data/tolerance.csv
@@ -0,0 +1,28 @@
+time,ticker,price,quantity,marketCenter,bid,ask
+20160525 13:30:00.023,MSFT,51.95,75,NASDAQ,51.95,51.95
+20160525 13:30:00.038,MSFT,51.95,155,NASDAQ,51.95,51.95
+20160525 13:30:00.048,GOOG,720.77,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.92,100,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,200,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,300,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,600,NASDAQ,720.5,720.93
+20160525 13:30:00.048,GOOG,720.93,44,NASDAQ,720.5,720.93
+20160525 13:30:00.074,AAPL,98.67,478343,NASDAQ,,
+20160525 13:30:00.075,AAPL,98.67,478343,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.66,6,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,30,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,75,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,20,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,35,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.65,10,NASDAQ,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.075,AAPL,98.55,6,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,1000,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,300,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,400,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,600,ARCA,98.55,98.56
+20160525 13:30:00.076,AAPL,98.56,200,ARCA,98.55,98.56
+20160525 13:30:00.078,MSFT,51.95,783,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
+20160525 13:30:00.078,MSFT,51.95,100,NASDAQ,51.92,51.95
diff --git a/pandas/tools/tests/data/trades.csv b/pandas/tools/tests/data/trades.csv
new file mode 100644
index 0000000000000..b26a4ce714255
--- /dev/null
+++ b/pandas/tools/tests/data/trades.csv
@@ -0,0 +1,28 @@
+time,ticker,price,quantity,marketCenter
+20160525 13:30:00.023,MSFT,51.9500,75,NASDAQ
+20160525 13:30:00.038,MSFT,51.9500,155,NASDAQ
+20160525 13:30:00.048,GOOG,720.7700,100,NASDAQ
+20160525 13:30:00.048,GOOG,720.9200,100,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,200,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,300,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,600,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,44,NASDAQ
+20160525 13:30:00.074,AAPL,98.6700,478343,NASDAQ
+20160525 13:30:00.075,AAPL,98.6700,478343,NASDAQ
+20160525 13:30:00.075,AAPL,98.6600,6,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,30,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,75,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,20,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,35,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,10,NASDAQ
+20160525 13:30:00.075,AAPL,98.5500,6,ARCA
+20160525 13:30:00.075,AAPL,98.5500,6,ARCA
+20160525 13:30:00.076,AAPL,98.5600,1000,ARCA
+20160525 13:30:00.076,AAPL,98.5600,200,ARCA
+20160525 13:30:00.076,AAPL,98.5600,300,ARCA
+20160525 13:30:00.076,AAPL,98.5600,400,ARCA
+20160525 13:30:00.076,AAPL,98.5600,600,ARCA
+20160525 13:30:00.076,AAPL,98.5600,200,ARCA
+20160525 13:30:00.078,MSFT,51.9500,783,NASDAQ
+20160525 13:30:00.078,MSFT,51.9500,100,NASDAQ
+20160525 13:30:00.078,MSFT,51.9500,100,NASDAQ
diff --git a/pandas/tools/tests/data/trades2.csv b/pandas/tools/tests/data/trades2.csv
new file mode 100644
index 0000000000000..64021faa68ce3
--- /dev/null
+++ b/pandas/tools/tests/data/trades2.csv
@@ -0,0 +1,78 @@
+time,ticker,price,quantity,marketCenter
+20160525 13:30:00.023,MSFT,51.9500,75,NASDAQ
+20160525 13:30:00.038,MSFT,51.9500,155,NASDAQ
+20160525 13:30:00.048,GOOG,720.7700,100,NASDAQ
+20160525 13:30:00.048,GOOG,720.9200,100,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,200,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,300,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,600,NASDAQ
+20160525 13:30:00.048,GOOG,720.9300,44,NASDAQ
+20160525 13:30:00.074,AAPL,98.6700,478343,NASDAQ
+20160525 13:30:00.075,AAPL,98.6700,478343,NASDAQ
+20160525 13:30:00.075,AAPL,98.6600,6,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,30,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,75,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,20,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,35,NASDAQ
+20160525 13:30:00.075,AAPL,98.6500,10,NASDAQ
+20160525 13:30:00.075,AAPL,98.5500,6,ARCA
+20160525 13:30:00.075,AAPL,98.5500,6,ARCA
+20160525 13:30:00.076,AAPL,98.5600,1000,ARCA
+20160525 13:30:00.076,AAPL,98.5600,200,ARCA
+20160525 13:30:00.076,AAPL,98.5600,300,ARCA
+20160525 13:30:00.076,AAPL,98.5600,400,ARCA
+20160525 13:30:00.076,AAPL,98.5600,600,ARCA
+20160525 13:30:00.076,AAPL,98.5600,200,ARCA
+20160525 13:30:00.078,MSFT,51.9500,783,NASDAQ
+20160525 13:30:00.078,MSFT,51.9500,100,NASDAQ
+20160525 13:30:00.078,MSFT,51.9500,100,NASDAQ
+20160525 13:30:00.084,AAPL,98.6400,40,NASDAQ
+20160525 13:30:00.084,AAPL,98.5500,149,EDGX
+20160525 13:30:00.086,AAPL,98.5600,500,ARCA
+20160525 13:30:00.104,AAPL,98.6300,647,EDGX
+20160525 13:30:00.104,AAPL,98.6300,300,EDGX
+20160525 13:30:00.104,AAPL,98.6300,50,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,50,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,70,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,70,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,1,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,62,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,10,NASDAQ
+20160525 13:30:00.104,AAPL,98.6300,100,ARCA
+20160525 13:30:00.105,AAPL,98.6300,100,ARCA
+20160525 13:30:00.105,AAPL,98.6300,700,ARCA
+20160525 13:30:00.106,AAPL,98.6300,61,EDGX
+20160525 13:30:00.107,AAPL,98.6300,100,ARCA
+20160525 13:30:00.107,AAPL,98.6300,53,ARCA
+20160525 13:30:00.108,AAPL,98.6300,100,ARCA
+20160525 13:30:00.108,AAPL,98.6300,839,ARCA
+20160525 13:30:00.115,AAPL,98.6300,5,EDGX
+20160525 13:30:00.118,AAPL,98.6300,295,EDGX
+20160525 13:30:00.118,AAPL,98.6300,5,EDGX
+20160525 13:30:00.128,AAPL,98.6300,100,NASDAQ
+20160525 13:30:00.128,AAPL,98.6300,100,NASDAQ
+20160525 13:30:00.128,MSFT,51.9200,100,ARCA
+20160525 13:30:00.129,AAPL,98.6200,100,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,10,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,59,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,31,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,69,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,12,NASDAQ
+20160525 13:30:00.129,AAPL,98.6200,12,EDGX
+20160525 13:30:00.129,AAPL,98.6200,100,ARCA
+20160525 13:30:00.129,AAPL,98.6200,100,ARCA
+20160525 13:30:00.130,MSFT,51.9500,317,ARCA
+20160525 13:30:00.130,MSFT,51.9500,283,ARCA
+20160525 13:30:00.135,MSFT,51.9300,100,EDGX
+20160525 13:30:00.135,AAPL,98.6200,100,ARCA
+20160525 13:30:00.144,AAPL,98.6200,12,NASDAQ
+20160525 13:30:00.144,AAPL,98.6200,88,NASDAQ
+20160525 13:30:00.144,AAPL,98.6200,162,NASDAQ
+20160525 13:30:00.144,AAPL,98.6100,100,BATS
+20160525 13:30:00.144,AAPL,98.6200,61,ARCA
+20160525 13:30:00.144,AAPL,98.6200,25,ARCA
+20160525 13:30:00.144,AAPL,98.6200,14,ARCA
+20160525 13:30:00.145,AAPL,98.6200,12,ARCA
+20160525 13:30:00.145,AAPL,98.6200,100,ARCA
+20160525 13:30:00.145,AAPL,98.6300,100,NASDAQ
+20160525 13:30:00.145,AAPL,98.6300,100,NASDAQ
diff --git a/pandas/tools/tests/test_merge_asof.py b/pandas/tools/tests/test_merge_asof.py
new file mode 100644
index 0000000000000..5d78ccf199ed3
--- /dev/null
+++ b/pandas/tools/tests/test_merge_asof.py
@@ -0,0 +1,352 @@
+import nose
+import os
+
+import numpy as np
+import pandas as pd
+from pandas import (merge_asof, read_csv,
+ to_datetime, Timedelta)
+from pandas.tools.merge import MergeError
+from pandas.util import testing as tm
+from pandas.util.testing import assert_frame_equal
+
+
+class TestAsOfMerge(tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def read_data(self, name, dedupe=False):
+ path = os.path.join(tm.get_data_path(), name)
+ x = read_csv(path)
+ if dedupe:
+ x = (x.drop_duplicates(['time', 'ticker'], keep='last')
+ .reset_index(drop=True)
+ )
+ x.time = to_datetime(x.time)
+ return x
+
+ def setUp(self):
+
+ self.trades = self.read_data('trades.csv')
+ self.quotes = self.read_data('quotes.csv', dedupe=True)
+ self.asof = self.read_data('asof.csv')
+ self.tolerance = self.read_data('tolerance.csv')
+ self.allow_exact_matches = self.read_data('allow_exact_matches.csv')
+ self.allow_exact_matches_and_tolerance = self.read_data(
+ 'allow_exact_matches_and_tolerance.csv')
+
+ def test_examples1(self):
+ """ doc-string examples """
+
+ left = pd.DataFrame({'a': [1, 5, 10],
+ 'left_val': ['a', 'b', 'c']})
+ right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
+ 'right_val': [1, 2, 3, 6, 7]})
+
+ pd.merge_asof(left, right, on='a')
+
+ def test_examples2(self):
+ """ doc-string examples """
+
+ trades = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.038',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.048']),
+ 'ticker': ['MSFT', 'MSFT',
+ 'GOOG', 'GOOG', 'AAPL'],
+ 'price': [51.95, 51.95,
+ 720.77, 720.92, 98.00],
+ 'quantity': [75, 155,
+ 100, 100, 100]},
+ columns=['time', 'ticker', 'price', 'quantity'])
+
+ quotes = pd.DataFrame({
+ 'time': pd.to_datetime(['20160525 13:30:00.023',
+ '20160525 13:30:00.023',
+ '20160525 13:30:00.030',
+ '20160525 13:30:00.041',
+ '20160525 13:30:00.048',
+ '20160525 13:30:00.049',
+ '20160525 13:30:00.072',
+ '20160525 13:30:00.075']),
+ 'ticker': ['GOOG', 'MSFT', 'MSFT',
+ 'MSFT', 'GOOG', 'AAPL', 'GOOG',
+ 'MSFT'],
+ 'bid': [720.50, 51.95, 51.97, 51.99,
+ 720.50, 97.99, 720.50, 52.01],
+ 'ask': [720.93, 51.96, 51.98, 52.00,
+ 720.93, 98.01, 720.88, 52.03]},
+ columns=['time', 'ticker', 'bid', 'ask'])
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=pd.Timedelta('2ms'))
+
+ pd.merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=pd.Timedelta('10ms'),
+ allow_exact_matches=False)
+
+ def test_basic(self):
+
+ expected = self.asof
+ trades = self.trades
+ quotes = self.quotes
+
+ result = merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+ assert_frame_equal(result, expected)
+
+ def test_basic_categorical(self):
+
+ expected = self.asof
+ trades = self.trades.copy()
+ trades.ticker = trades.ticker.astype('category')
+ quotes = self.quotes.copy()
+ quotes.ticker = quotes.ticker.astype('category')
+
+ result = merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+ assert_frame_equal(result, expected)
+
+ def test_missing_right_by(self):
+
+ expected = self.asof
+ trades = self.trades
+ quotes = self.quotes
+
+ q = quotes[quotes.ticker != 'MSFT']
+ result = merge_asof(trades, q,
+ on='time',
+ by='ticker')
+ expected.loc[expected.ticker == 'MSFT', ['bid', 'ask']] = np.nan
+ assert_frame_equal(result, expected)
+
+ def test_basic2(self):
+
+ expected = self.read_data('asof2.csv')
+ trades = self.read_data('trades2.csv')
+ quotes = self.read_data('quotes2.csv', dedupe=True)
+
+ result = merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+ assert_frame_equal(result, expected)
+
+ def test_basic_no_by(self):
+ f = lambda x: x[x.ticker == 'MSFT'].drop('ticker', axis=1) \
+ .reset_index(drop=True)
+
+ # just use a single ticker
+ expected = f(self.asof)
+ trades = f(self.trades)
+ quotes = f(self.quotes)
+
+ result = merge_asof(trades, quotes,
+ on='time')
+ assert_frame_equal(result, expected)
+
+ def test_valid_join_keys(self):
+
+ trades = self.trades
+ quotes = self.quotes
+
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ left_on='time',
+ right_on='bid',
+ by='ticker')
+
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ on=['time', 'ticker'],
+ by='ticker')
+
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ by='ticker')
+
+ def test_with_duplicates(self):
+
+ q = pd.concat([self.quotes, self.quotes]).sort_values(
+ ['time', 'ticker']).reset_index(drop=True)
+ result = merge_asof(self.trades, q,
+ on='time',
+ by='ticker')
+ expected = self.read_data('asof.csv')
+ assert_frame_equal(result, expected)
+
+ result = merge_asof(self.trades, q,
+ on='time',
+ by='ticker',
+ check_duplicates=False)
+ expected = self.read_data('asof.csv')
+ expected = pd.concat([expected, expected]).sort_values(
+ ['time', 'ticker']).reset_index(drop=True)
+
+ # the results are not ordered in a meaningful way
+ # nor are the exact matches duplicated, so comparisons
+ # are pretty tricky here, however the uniques are the same
+
+ def aligner(x, ticker):
+ return (x[x.ticker == ticker]
+ .sort_values(['time', 'ticker', 'quantity', 'price',
+ 'marketCenter', 'bid', 'ask'])
+ .drop_duplicates(keep='last')
+ .reset_index(drop=True)
+ )
+
+ for ticker in expected.ticker.unique():
+ r = aligner(result, ticker)
+ e = aligner(expected, ticker)
+ assert_frame_equal(r, e)
+
+ def test_with_duplicates_no_on(self):
+
+ df1 = pd.DataFrame({'key': [1, 1, 3],
+ 'left_val': [1, 2, 3]})
+ df2 = pd.DataFrame({'key': [1, 3, 3],
+ 'right_val': [1, 2, 3]})
+ result = merge_asof(df1, df2, on='key', check_duplicates=False)
+ expected = pd.DataFrame({'key': [1, 1, 3, 3],
+ 'left_val': [1, 2, 3, 3],
+ 'right_val': [1, 1, 2, 3]})
+ assert_frame_equal(result, expected)
+
+ df1 = pd.DataFrame({'key': [1, 1, 3],
+ 'left_val': [1, 2, 3]})
+ df2 = pd.DataFrame({'key': [1, 2, 2],
+ 'right_val': [1, 2, 3]})
+ result = merge_asof(df1, df2, on='key')
+ expected = pd.DataFrame({'key': [1, 1, 3],
+ 'left_val': [1, 2, 3],
+ 'right_val': [1, 1, 3]})
+ assert_frame_equal(result, expected)
+
+ def test_valid_allow_exact_matches(self):
+
+ trades = self.trades
+ quotes = self.quotes
+
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ allow_exact_matches='foo')
+
+ def test_valid_tolerance(self):
+
+ trades = self.trades
+ quotes = self.quotes
+
+ # dti
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=Timedelta('1s'))
+
+ # integer
+ merge_asof(trades.reset_index(), quotes.reset_index(),
+ on='index',
+ by='ticker',
+ tolerance=1)
+
+ # incompat
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=1)
+
+ # invalid
+ with self.assertRaises(MergeError):
+ merge_asof(trades.reset_index(), quotes.reset_index(),
+ on='index',
+ by='ticker',
+ tolerance=1.0)
+
+ # invalid negative
+ with self.assertRaises(MergeError):
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=-Timedelta('1s'))
+
+ with self.assertRaises(MergeError):
+ merge_asof(trades.reset_index(), quotes.reset_index(),
+ on='index',
+ by='ticker',
+ tolerance=-1)
+
+ def test_non_sorted(self):
+
+ trades = self.trades.sort_values('time', ascending=False)
+ quotes = self.quotes.sort_values('time', ascending=False)
+
+        # we require that trades & quotes are already sorted on time
+ self.assertFalse(trades.time.is_monotonic)
+ self.assertFalse(quotes.time.is_monotonic)
+ with self.assertRaises(ValueError):
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+
+ trades = self.trades.sort_values('time')
+ self.assertTrue(trades.time.is_monotonic)
+ self.assertFalse(quotes.time.is_monotonic)
+ with self.assertRaises(ValueError):
+ merge_asof(trades, quotes,
+ on='time',
+ by='ticker')
+
+ quotes = self.quotes.sort_values('time')
+ self.assertTrue(trades.time.is_monotonic)
+ self.assertTrue(quotes.time.is_monotonic)
+
+ # ok, though has dupes
+ merge_asof(trades, self.quotes,
+ on='time',
+ by='ticker')
+
+ def test_tolerance(self):
+
+ trades = self.trades
+ quotes = self.quotes
+
+ result = merge_asof(trades, quotes,
+ on='time',
+ by='ticker',
+ tolerance=Timedelta('1day'))
+ expected = self.tolerance
+ assert_frame_equal(result, expected)
+
+ def test_allow_exact_matches(self):
+
+ result = merge_asof(self.trades, self.quotes,
+ on='time',
+ by='ticker',
+ allow_exact_matches=False)
+ expected = self.allow_exact_matches
+ assert_frame_equal(result, expected)
+
+ def test_allow_exact_matches_and_tolerance(self):
+
+ result = merge_asof(self.trades, self.quotes,
+ on='time',
+ by='ticker',
+ tolerance=Timedelta('100ms'),
+ allow_exact_matches=False)
+ expected = self.allow_exact_matches_and_tolerance
+ assert_frame_equal(result, expected)
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tools/tests/test_ordered_merge.py b/pandas/tools/tests/test_merge_ordered.py
similarity index 85%
rename from pandas/tools/tests/test_ordered_merge.py
rename to pandas/tools/tests/test_merge_ordered.py
index 53f00d9761f32..0511a0ca6d1cf 100644
--- a/pandas/tools/tests/test_ordered_merge.py
+++ b/pandas/tools/tests/test_merge_ordered.py
@@ -1,7 +1,7 @@
import nose
import pandas as pd
-from pandas import DataFrame, ordered_merge
+from pandas import DataFrame, merge_ordered
from pandas.util import testing as tm
from pandas.util.testing import assert_frame_equal
@@ -17,10 +17,15 @@ def setUp(self):
self.right = DataFrame({'key': ['b', 'c', 'd', 'f'],
'rvalue': [1, 2, 3., 4]})
+ def test_deprecation(self):
+
+ with tm.assert_produces_warning(FutureWarning):
+ pd.ordered_merge(self.left, self.right, on='key')
+
# GH #813
def test_basic(self):
- result = ordered_merge(self.left, self.right, on='key')
+ result = merge_ordered(self.left, self.right, on='key')
expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
'lvalue': [1, nan, 2, nan, 3, nan],
'rvalue': [nan, 1, 2, 3, nan, 4]})
@@ -28,7 +33,7 @@ def test_basic(self):
assert_frame_equal(result, expected)
def test_ffill(self):
- result = ordered_merge(
+ result = merge_ordered(
self.left, self.right, on='key', fill_method='ffill')
expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
'lvalue': [1., 1, 2, 2, 3, 3.],
@@ -42,7 +47,7 @@ def test_multigroup(self):
left['group'] = ['a'] * 3 + ['b'] * 3
# right['group'] = ['a'] * 4 + ['b'] * 4
- result = ordered_merge(left, self.right, on='key', left_by='group',
+ result = merge_ordered(left, self.right, on='key', left_by='group',
fill_method='ffill')
expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'] * 2,
'lvalue': [1., 1, 2, 2, 3, 3.] * 2,
@@ -51,11 +56,11 @@ def test_multigroup(self):
assert_frame_equal(result, expected.ix[:, result.columns])
- result2 = ordered_merge(self.right, left, on='key', right_by='group',
+ result2 = merge_ordered(self.right, left, on='key', right_by='group',
fill_method='ffill')
assert_frame_equal(result, result2.ix[:, result.columns])
- result = ordered_merge(left, self.right, on='key', left_by='group')
+ result = merge_ordered(left, self.right, on='key', left_by='group')
self.assertTrue(result['group'].notnull().all())
def test_merge_type(self):
diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py
index 0b91fd1ef1c02..bb5429b5e8836 100644
--- a/pandas/tools/tests/test_tile.py
+++ b/pandas/tools/tests/test_tile.py
@@ -216,8 +216,7 @@ def test_label_formatting(self):
def test_qcut_binning_issues(self):
# #1978, 1979
- path = os.path.join(curpath(), 'cut_data.csv')
-
+ path = os.path.join(tm.get_data_path(), 'cut_data.csv')
arr = np.loadtxt(path)
result = qcut(arr, 20)
diff --git a/setup.py b/setup.py
index 1d189364239a9..adea92896d382 100755
--- a/setup.py
+++ b/setup.py
@@ -591,6 +591,7 @@ def pxd(name):
'tests/data/*.xlsx',
'tests/data/*.xlsm',
'tests/data/*.table',
+ 'tests/tools/data/*.csv',
'tests/parser/data/*.csv',
'tests/parser/data/*.gz',
'tests/parser/data/*.bz2',
| closes #1870
xref #2941
[Here](http://nbviewer.jupyter.org/gist/jreback/5f089d308750c89b2a7d7446b790c056) is a notebook of example usage and timings
| https://api.github.com/repos/pandas-dev/pandas/pulls/13358 | 2016-06-03T19:34:49Z | 2016-06-17T00:09:32Z | null | 2016-06-17T00:10:57Z |
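The asof-join semantics exercised by the tests above (nearest prior match, optional `tolerance`, optional exclusion of exact matches, sorted-input requirement) can be sketched in plain Python. This is a toy illustration, not the pandas implementation; the helper name and signature are invented here:

```python
import bisect

def asof_merge(left_times, right_times, right_values,
               tolerance=None, allow_exact_matches=True):
    # For each left timestamp, take the most recent right value at or
    # before it; right_times must already be sorted, mirroring the
    # ValueError raised for non-monotonic input in test_non_sorted.
    out = []
    for t in left_times:
        if allow_exact_matches:
            i = bisect.bisect_right(right_times, t) - 1  # rightmost <= t
        else:
            i = bisect.bisect_left(right_times, t) - 1   # rightmost < t
        if i >= 0 and (tolerance is None or t - right_times[i] <= tolerance):
            out.append(right_values[i])
        else:
            out.append(None)  # no match within tolerance
    return out
```

With `allow_exact_matches=False` a quote at exactly the trade time is skipped, and `tolerance` bounds how stale a match may be — the two knobs combined in `test_allow_exact_matches_and_tolerance`.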
BUG: revert assert_numpy_array_equal to before c2ea8fb2 (accept non-ndarrays) | diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index c4e864a909c03..0ad0930862866 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -186,7 +186,7 @@ def test_numpy_array_equal_message(self):
assert_almost_equal(np.array([1, 2]), np.array([3, 4, 5]))
# scalar comparison
- expected = """Expected type """
+ expected = """: 1 != 2"""
with assertRaisesRegexp(AssertionError, expected):
assert_numpy_array_equal(1, 2)
expected = """expected 2\\.00000 but got 1\\.00000, with decimal 5"""
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 03ccfcab24f58..b457b8546e7ba 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -30,6 +30,7 @@
needs_i8_conversion, is_categorical_dtype)
from pandas.formats.printing import pprint_thing
from pandas.core.algorithms import take_1d
+from pandas.lib import isscalar
import pandas.compat as compat
from pandas.compat import(
@@ -1009,29 +1010,33 @@ def assert_numpy_array_equal(left, right, strict_nan=False,
assertion message
"""
- # instance validation
- # to show a detailed erorr message when classes are different
- assert_class_equal(left, right, obj=obj)
- # both classes must be an np.ndarray
- assertIsInstance(left, np.ndarray, '[ndarray] ')
- assertIsInstance(right, np.ndarray, '[ndarray] ')
-
def _raise(left, right, err_msg):
if err_msg is None:
- if left.shape != right.shape:
- raise_assert_detail(obj, '{0} shapes are different'
- .format(obj), left.shape, right.shape)
-
- diff = 0
- for l, r in zip(left, right):
- # count up differences
- if not array_equivalent(l, r, strict_nan=strict_nan):
- diff += 1
-
- diff = diff * 100.0 / left.size
- msg = '{0} values are different ({1} %)'\
- .format(obj, np.round(diff, 5))
- raise_assert_detail(obj, msg, left, right)
+ # show detailed error
+ if isscalar(left) and isscalar(right):
+ # show scalar comparison error
+ assert_equal(left, right)
+ elif is_list_like(left) and is_list_like(right):
+ # some test cases pass list
+ left = np.asarray(left)
+ right = np.array(right)
+
+ if left.shape != right.shape:
+ raise_assert_detail(obj, '{0} shapes are different'
+ .format(obj), left.shape, right.shape)
+
+ diff = 0
+ for l, r in zip(left, right):
+ # count up differences
+ if not array_equivalent(l, r, strict_nan=strict_nan):
+ diff += 1
+
+ diff = diff * 100.0 / left.size
+ msg = '{0} values are different ({1} %)'\
+ .format(obj, np.round(diff, 5))
+ raise_assert_detail(obj, msg, left, right)
+ else:
+ assert_class_equal(left, right, obj=obj)
raise AssertionError(err_msg)
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
`assert_numpy_array_equal`'s docstring (and the existence of the `obj` argument) is at odds with the fact that it's now strict about its arguments being ndarrays.
There are [practical reasons](https://github.com/pydata/pandas/pull/13205#discussion_r65358471) to allow other kinds of arguments.
On the other hand, I don't see any reason for being strict, but I assume @sinhrks 's [PR](https://github.com/pydata/pandas/pull/13311) had some rationale behind it: so a possible solution is an argument `ensure_ndarray=True` which restores the previous behaviour when set to `False`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13355 | 2016-06-03T10:38:00Z | 2016-06-14T14:39:18Z | null | 2016-06-14T14:39:18Z |
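The lenient dispatch this PR restores — compare scalars directly, list-likes element-wise, and only fall back to a class check otherwise — can be sketched without numpy (a hypothetical helper for illustration, not the pandas `assert_numpy_array_equal` API):

```python
def assert_values_equal(left, right):
    def is_scalar(x):
        return not hasattr(x, '__len__')

    if is_scalar(left) and is_scalar(right):
        # scalar comparison error, e.g. ": 1 != 2"
        assert left == right, ': %r != %r' % (left, right)
    elif not is_scalar(left) and not is_scalar(right):
        left, right = list(left), list(right)
        assert len(left) == len(right), 'shapes are different'
        bad = sum(1 for l, r in zip(left, right) if l != r)
        # report the percentage of differing values, like raise_assert_detail
        assert bad == 0, 'values are different (%.5f %%)' % (100.0 * bad / len(left))
    else:
        raise AssertionError('classes differ: %s vs %s'
                             % (type(left).__name__, type(right).__name__))
```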
ENH: Adding json line parsing to pd.read_json #9180 | diff --git a/doc/source/io.rst b/doc/source/io.rst
index da0444a8b8df9..58a3d03a9b73a 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1466,6 +1466,7 @@ with optional parameters:
- ``force_ascii`` : force encoded string to be ASCII, default True.
- ``date_unit`` : The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'.
- ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
+- ``lines`` : If ``records`` orient, then will write each record per line as json.
Note ``NaN``'s, ``NaT``'s and ``None`` will be converted to ``null`` and ``datetime`` objects will be converted based on the ``date_format`` and ``date_unit`` parameters.
@@ -1656,6 +1657,8 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``
None. By default the timestamp precision will be detected, if this is not desired
then pass one of 's', 'ms', 'us' or 'ns' to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
+- ``lines`` : reads file as one json object per line.
+- ``encoding`` : The encoding to use to decode py3 bytes.
The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parseable.
@@ -1845,6 +1848,26 @@ into a flat table.
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
+.. _io.jsonl:
+
+Line delimited json
+'''''''''''''''''''
+
+.. versionadded:: 0.19.0
+
+pandas is able to read and write line-delimited json files that are common in data processing pipelines
+using Hadoop or Spark.
+
+.. ipython:: python
+
+ jsonl = '''
+ {"a":1,"b":2}
+ {"a":3,"b":4}
+ '''
+ df = pd.read_json(jsonl, lines=True)
+ df
+ df.to_json(orient='records', lines=True)
+
HTML
----
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index f65f7d57d5d08..f549d7361ea5f 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -254,6 +254,7 @@ Other enhancements
.. _whatsnew_0190.api:
+
API changes
~~~~~~~~~~~
@@ -271,7 +272,7 @@ API changes
- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)
- Passing ``Period`` with multiple frequencies to normal ``Index`` now returns ``Index`` with ``object`` dtype (:issue:`13664`)
- ``PeriodIndex.fillna`` with ``Period`` has different freq now coerces to ``object`` dtype (:issue:`13664`)
-
+- ``pd.read_json`` and ``DataFrame.to_json`` have gained support for reading and writing json lines with the ``lines`` option, see :ref:`Line delimited json <io.jsonl>` (:issue:`9180`)
.. _whatsnew_0190.api.tolist:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6c1676fbdd7f4..cf5e99bd52993 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1016,7 +1016,7 @@ def __setstate__(self, state):
def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
double_precision=10, force_ascii=True, date_unit='ms',
- default_handler=None):
+ default_handler=None, lines=False):
"""
Convert the object to a JSON string.
@@ -1064,6 +1064,13 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
Handler to call if object cannot otherwise be converted to a
suitable format for JSON. Should receive a single argument which is
the object to convert and return a serialisable object.
+        lines : boolean, default False
+ If 'orient' is 'records' write out line delimited json format. Will
+ throw ValueError if incorrect 'orient' since others are not list
+ like.
+
+ .. versionadded:: 0.19.0
+
Returns
-------
@@ -1076,7 +1083,8 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
date_format=date_format,
double_precision=double_precision,
force_ascii=force_ascii, date_unit=date_unit,
- default_handler=default_handler)
+ default_handler=default_handler,
+ lines=lines)
def to_hdf(self, path_or_buf, key, **kwargs):
"""Activate the HDFStore.
diff --git a/pandas/io/json.py b/pandas/io/json.py
index fd97e51208f7e..5d937856ae06d 100644
--- a/pandas/io/json.py
+++ b/pandas/io/json.py
@@ -7,22 +7,49 @@
import pandas.json as _json
from pandas.tslib import iNaT
-from pandas.compat import long, u
+from pandas.compat import StringIO, long, u
from pandas import compat, isnull
from pandas import Series, DataFrame, to_datetime
-from pandas.io.common import get_filepath_or_buffer
+from pandas.io.common import get_filepath_or_buffer, _get_handle
from pandas.core.common import AbstractMethodError
from pandas.formats.printing import pprint_thing
loads = _json.loads
dumps = _json.dumps
+
# interface to/from
+def _convert_to_line_delimits(s):
+ """Helper function that converts json lists to line delimited json."""
+
+ # Determine we have a JSON list to turn to lines otherwise just return the
+ # json object, only lists can
+ if not s[0] == '[' and s[-1] == ']':
+ return s
+ s = s[1:-1]
+ num_open_brackets_seen = 0
+ commas_to_replace = []
+ for idx, char in enumerate(s): # iter through to find all
+ if char == ',': # commas that should be \n
+ if num_open_brackets_seen == 0:
+ commas_to_replace.append(idx)
+ elif char == '{':
+ num_open_brackets_seen += 1
+ elif char == '}':
+ num_open_brackets_seen -= 1
+ s_arr = np.array(list(s)) # Turn to an array to set
+ s_arr[commas_to_replace] = '\n' # all commas at once.
+ s = ''.join(s_arr)
+ return s
def to_json(path_or_buf, obj, orient=None, date_format='epoch',
double_precision=10, force_ascii=True, date_unit='ms',
- default_handler=None):
+ default_handler=None, lines=False):
+
+ if lines and orient != 'records':
+ raise ValueError(
+ "'lines' keyword only valid when 'orient' is records")
if isinstance(obj, Series):
s = SeriesWriter(
@@ -37,6 +64,9 @@ def to_json(path_or_buf, obj, orient=None, date_format='epoch',
else:
raise NotImplementedError("'obj' should be a Series or a DataFrame")
+ if lines:
+ s = _convert_to_line_delimits(s)
+
if isinstance(path_or_buf, compat.string_types):
with open(path_or_buf, 'w') as fh:
fh.write(s)
@@ -105,7 +135,8 @@ def _format_axes(self):
def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
convert_axes=True, convert_dates=True, keep_default_dates=True,
- numpy=False, precise_float=False, date_unit=None):
+ numpy=False, precise_float=False, date_unit=None, encoding=None,
+ lines=False):
"""
Convert a JSON string to pandas object
@@ -178,13 +209,23 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
is to try and detect the correct precision, but if this is not desired
then pass one of 's', 'ms', 'us' or 'ns' to force parsing only seconds,
milliseconds, microseconds or nanoseconds respectively.
+ lines : boolean, default False
+ Read the file as a json object per line.
+
+ .. versionadded:: 0.19.0
+
+ encoding : str, default is 'utf-8'
+ The encoding to use to decode py3 bytes.
+
+ .. versionadded:: 0.19.0
Returns
-------
result : Series or DataFrame
"""
- filepath_or_buffer, _, _ = get_filepath_or_buffer(path_or_buf)
+ filepath_or_buffer, _, _ = get_filepath_or_buffer(path_or_buf,
+ encoding=encoding)
if isinstance(filepath_or_buffer, compat.string_types):
try:
exists = os.path.exists(filepath_or_buffer)
@@ -195,7 +236,7 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
exists = False
if exists:
- with open(filepath_or_buffer, 'r') as fh:
+ with _get_handle(filepath_or_buffer, 'r', encoding=encoding) as fh:
json = fh.read()
else:
json = filepath_or_buffer
@@ -204,6 +245,12 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
else:
json = filepath_or_buffer
+ if lines:
+ # If given a json lines file, we break the string into lines, add
+ # commas and put it in a json list to make a valid json object.
+ lines = list(StringIO(json.strip()))
+ json = u'[' + u','.join(lines) + u']'
+
obj = None
if typ == 'frame':
obj = FrameParser(json, orient, dtype, convert_axes, convert_dates,
diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index 9f8aedc2e399e..6516ced7b5fb7 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -948,6 +948,58 @@ def test_tz_range_is_utc(self):
df = DataFrame({'DT': dti})
self.assertEqual(dfexp, pd.json.dumps(df, iso_dates=True))
+ def test_read_jsonl(self):
+ # GH9180
+ result = read_json('{"a": 1, "b": 2}\n{"b":2, "a" :1}\n', lines=True)
+ expected = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
+ assert_frame_equal(result, expected)
+
+ def test_to_jsonl(self):
+ # GH9180
+ df = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
+ result = df.to_json(orient="records", lines=True)
+ expected = '{"a":1,"b":2}\n{"a":1,"b":2}'
+ self.assertEqual(result, expected)
+
+ def test_latin_encoding(self):
+ if compat.PY2:
+ self.assertRaisesRegexp(
+ TypeError, '\[unicode\] is not implemented as a table column')
+ return
+
+ values = [[b'E\xc9, 17', b'', b'a', b'b', b'c'],
+ [b'E\xc9, 17', b'a', b'b', b'c'],
+ [b'EE, 17', b'', b'a', b'b', b'c'],
+ [b'E\xc9, 17', b'\xf8\xfc', b'a', b'b', b'c'],
+ [b'', b'a', b'b', b'c'],
+ [b'\xf8\xfc', b'a', b'b', b'c'],
+ [b'A\xf8\xfc', b'', b'a', b'b', b'c'],
+ [np.nan, b'', b'b', b'c'],
+ [b'A\xf8\xfc', np.nan, b'', b'b', b'c']]
+
+ def _try_decode(x, encoding='latin-1'):
+ try:
+ return x.decode(encoding)
+ except AttributeError:
+ return x
+
+ # not sure how to remove latin-1 from code in python 2 and 3
+ values = [[_try_decode(x) for x in y] for y in values]
+
+ examples = []
+ for dtype in ['category', object]:
+ for val in values:
+ examples.append(Series(val, dtype=dtype))
+
+ def roundtrip(s, encoding='latin-1'):
+ with ensure_clean('test.json') as path:
+ s.to_json(path, encoding=encoding)
+ retr = read_json(path, encoding=encoding)
+ assert_series_equal(s, retr, check_categorical=False)
+
+ for s in examples:
+ roundtrip(s)
+
if __name__ == '__main__':
import nose
| - [x] closes #9180
- [x] closes #13356
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13351 | 2016-06-02T23:56:55Z | 2016-07-24T14:14:06Z | null | 2016-07-24T15:37:50Z |
BUG: DataFrame.to_string with formatters, header and index False | https://github.com/pandas-dev/pandas/pull/13350.diff | closes #13032
- [x] tests added / passed - added test specific to format bug
- [x] passes `pep8radius master --diff`
- [x] whatsnew entry - not needed
Found this bug experimenting with formatters. First pull request to pandas, but I believe guidelines are quite clear. I can explain what was happening in more detail if that is necessary.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13350 | 2016-06-02T23:15:57Z | 2016-09-10T02:25:40Z | null | 2023-05-11T01:13:39Z |
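The fix concerns how per-column formatters are looked up when neither header nor index is printed. The dispatch itself can be sketched standalone (a toy model, not pandas' formatting code; all names here are invented):

```python
def format_rows(rows, columns, formatters):
    # Apply each column's formatter by name, falling back to str();
    # the lookup must key on the column even when headers are hidden.
    lines = []
    for row in rows:
        lines.append(' '.join(
            formatters.get(col, str)(val) for col, val in zip(columns, row)))
    return '\n'.join(lines)
```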
DOC: fix comment on previous versions cythonmagic | diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst
index a4db4b7c0d953..685a8690a53d5 100644
--- a/doc/source/enhancingperf.rst
+++ b/doc/source/enhancingperf.rst
@@ -95,7 +95,7 @@ Plain cython
~~~~~~~~~~~~
First we're going to need to import the cython magic function to ipython (for
-cython versions >=0.21 you can use ``%load_ext Cython``):
+cython versions < 0.21 you can use ``%load_ext cythonmagic``):
.. ipython:: python
:okwarning:
| Small thing I just noticed in the docs (the note on the other version was not updated when the example was changed from cythonmagic -> Cython)
| https://api.github.com/repos/pandas-dev/pandas/pulls/13343 | 2016-06-01T13:47:59Z | 2016-06-02T17:12:43Z | null | 2016-06-02T17:12:43Z |
Cleanup compression | diff --git a/pandas/io/common.py b/pandas/io/common.py
index 127ebc4839fd3..b65eebe3f6a9a 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -285,53 +285,84 @@ def ZipFile(*args, **kwargs):
ZipFile = zipfile.ZipFile
-def _get_handle(path, mode, encoding=None, compression=None, memory_map=False):
+def _get_handle(source, mode, encoding=None, compression=None, memory_map=False):
"""Gets file handle for given path and mode.
"""
- if compression is not None:
- if encoding is not None and not compat.PY3:
+
+ f = source
+ is_path = isinstance(source, compat.string_types)
+
+ # in Python 3, convert BytesIO or fileobjects passed with an encoding
+ if compat.PY3 and isinstance(source, compat.BytesIO):
+ from io import TextIOWrapper
+
+ return TextIOWrapper(source, encoding=encoding)
+
+ elif compression is not None:
+ compression = compression.lower()
+ if encoding is not None and not compat.PY3 and not is_path:
msg = 'encoding + compression not yet supported in Python 2'
raise ValueError(msg)
+ # GZ Compression
if compression == 'gzip':
import gzip
- f = gzip.GzipFile(path, mode)
+
+ f = gzip.GzipFile(source, mode) \
+ if is_path else gzip.GzipFile(fileobj=source)
+
+ # BZ Compression
elif compression == 'bz2':
import bz2
- f = bz2.BZ2File(path, mode)
+
+ if is_path:
+ f = bz2.BZ2File(source, mode)
+
+ else:
+ f = bz2.BZ2File(source) if compat.PY3 else StringIO(
+ bz2.decompress(source.read()))
+ # Python 2's bz2 module can't take file objects, so have to
+ # run through decompress manually
+
+ # ZIP Compression
elif compression == 'zip':
import zipfile
- zip_file = zipfile.ZipFile(path)
+ zip_file = zipfile.ZipFile(source)
zip_names = zip_file.namelist()
if len(zip_names) == 1:
- file_name = zip_names.pop()
- f = zip_file.open(file_name)
+ f = zip_file.open(zip_names.pop())
elif len(zip_names) == 0:
raise ValueError('Zero files found in ZIP file {}'
- .format(path))
+ .format(source))
else:
raise ValueError('Multiple files found in ZIP file.'
' Only one file per ZIP :{}'
.format(zip_names))
+
+ # XZ Compression
elif compression == 'xz':
lzma = compat.import_lzma()
- f = lzma.LZMAFile(path, mode)
+ f = lzma.LZMAFile(source, mode)
+
else:
- raise ValueError('Unrecognized compression type: %s' %
- compression)
+ raise ValueError('Unrecognized compression: %s' % compression)
+
if compat.PY3:
from io import TextIOWrapper
+
f = TextIOWrapper(f, encoding=encoding)
+
return f
- else:
+
+ elif is_path:
if compat.PY3:
if encoding:
- f = open(path, mode, encoding=encoding)
+ f = open(source, mode, encoding=encoding)
else:
- f = open(path, mode, errors='replace')
+ f = open(source, mode, errors='replace')
else:
- f = open(path, mode)
+ f = open(source, mode)
if memory_map and hasattr(f, 'fileno'):
try:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 93c431531355a..b82941ddcb997 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1785,20 +1785,10 @@ def __init__(self, f, **kwds):
self.comment = kwds['comment']
self._comment_lines = []
- if isinstance(f, compat.string_types):
- f = _get_handle(f, 'r', encoding=self.encoding,
- compression=self.compression,
- memory_map=self.memory_map)
- self.handles.append(f)
- elif self.compression:
- f = _wrap_compressed(f, self.compression, self.encoding)
- self.handles.append(f)
- # in Python 3, convert BytesIO or fileobjects passed with an encoding
- elif compat.PY3 and isinstance(f, compat.BytesIO):
- from io import TextIOWrapper
-
- f = TextIOWrapper(f, encoding=self.encoding)
- self.handles.append(f)
+ f = _get_handle(f, 'r', encoding=self.encoding,
+ compression=self.compression,
+ memory_map=self.memory_map)
+ self.handles.append(f)
# Set self.data to something that can read lines.
if hasattr(f, 'readline'):
| - [x] closes #12688
- [x] tests passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry - not needed
| https://api.github.com/repos/pandas-dev/pandas/pulls/13340 | 2016-06-01T04:00:04Z | 2016-12-13T23:11:38Z | null | 2016-12-13T23:12:40Z |
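The refactor above folds the compression handling into a single `_get_handle` dispatch. The shape of that dispatch, reduced to the two stdlib-only cases, looks roughly like this (a sketch, not the pandas function; the helper name is invented):

```python
import bz2
import gzip

_OPENERS = {'gzip': gzip.GzipFile, 'bz2': bz2.BZ2File}

def read_maybe_compressed(path, compression=None):
    if compression is None:
        opener = open
    else:
        try:
            # the compression keyword is lower-cased first, as in the PR
            opener = _OPENERS[compression.lower()]
        except KeyError:
            raise ValueError('Unrecognized compression: %s' % compression)
    with opener(path) as f:
        data = f.read()
    # compressed openers yield bytes; decode to match text-mode open()
    return data if isinstance(data, str) else data.decode('utf-8')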
TST: computation/test_eval.py tests (slow) | diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index 4dc1e24618a83..5019dd392a567 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -248,7 +248,8 @@ def check_operands(left, right, cmp_op):
for ex in (ex1, ex2, ex3):
result = pd.eval(ex, engine=self.engine,
parser=self.parser)
- tm.assert_numpy_array_equal(result, expected)
+
+ tm.assert_almost_equal(result, expected)
def check_simple_cmp_op(self, lhs, cmp1, rhs):
ex = 'lhs {0} rhs'.format(cmp1)
@@ -265,7 +266,8 @@ def check_binary_arith_op(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = _eval_single_bin(lhs, arith1, rhs, self.engine)
- tm.assert_numpy_array_equal(result, expected)
+
+ tm.assert_almost_equal(result, expected)
ex = 'lhs {0} rhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
nlhs = _eval_single_bin(lhs, arith1, rhs,
@@ -280,8 +282,10 @@ def check_alignment(self, result, nlhs, ghs, op):
# TypeError, AttributeError: series or frame with scalar align
pass
else:
+
+ # direct numpy comparison
expected = self.ne.evaluate('nlhs {0} ghs'.format(op))
- tm.assert_numpy_array_equal(result, expected)
+ tm.assert_numpy_array_equal(result.values, expected)
# modulus, pow, and floor division require special casing
@@ -349,12 +353,12 @@ def check_single_invert_op(self, lhs, cmp1, rhs):
elb = np.array([bool(el)])
expected = ~elb
result = pd.eval('~elb', engine=self.engine, parser=self.parser)
- tm.assert_numpy_array_equal(expected, result)
+ tm.assert_almost_equal(expected, result)
for engine in self.current_engines:
tm.skip_if_no_ne(engine)
- tm.assert_numpy_array_equal(result, pd.eval('~elb', engine=engine,
- parser=self.parser))
+ tm.assert_almost_equal(result, pd.eval('~elb', engine=engine,
+ parser=self.parser))
def check_compound_invert_op(self, lhs, cmp1, rhs):
skip_these = 'in', 'not in'
@@ -374,13 +378,13 @@ def check_compound_invert_op(self, lhs, cmp1, rhs):
else:
expected = ~expected
result = pd.eval(ex, engine=self.engine, parser=self.parser)
- tm.assert_numpy_array_equal(expected, result)
+ tm.assert_almost_equal(expected, result)
# make sure the other engines work the same as this one
for engine in self.current_engines:
tm.skip_if_no_ne(engine)
ev = pd.eval(ex, engine=self.engine, parser=self.parser)
- tm.assert_numpy_array_equal(ev, result)
+ tm.assert_almost_equal(ev, result)
def ex(self, op, var_name='lhs'):
return '{0}{1}'.format(op, var_name)
@@ -728,7 +732,7 @@ def check_alignment(self, result, nlhs, ghs, op):
pass
else:
expected = eval('nlhs {0} ghs'.format(op))
- tm.assert_numpy_array_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
class TestEvalPythonPandas(TestEvalPythonPython):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index ef94692ea9673..03ccfcab24f58 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -25,7 +25,8 @@
import pandas as pd
from pandas.core.common import (is_sequence, array_equivalent,
is_list_like, is_datetimelike_v_numeric,
- is_datetimelike_v_object, is_number,
+ is_datetimelike_v_object,
+ is_number, is_bool,
needs_i8_conversion, is_categorical_dtype)
from pandas.formats.printing import pprint_thing
from pandas.core.algorithms import take_1d
@@ -157,6 +158,9 @@ def assert_almost_equal(left, right, check_exact=False,
if is_number(left) and is_number(right):
# do not compare numeric classes, like np.float64 and float
pass
+ elif is_bool(left) and is_bool(right):
+ # do not compare bool classes, like np.bool_ and bool
+ pass
else:
if (isinstance(left, np.ndarray) or
isinstance(right, np.ndarray)):
| closes #13338
| https://api.github.com/repos/pandas-dev/pandas/pulls/13339 | 2016-05-31T22:10:29Z | 2016-05-31T23:18:16Z | null | 2016-05-31T23:18:16Z |
BUG: upcasting on reshaping ops #13247 | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 42db0388ca5d9..b11b27716ce8b 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -43,7 +43,7 @@ Backwards incompatible API changes
.. _whatsnew_0190.api:
-
+- Concating multiple objects will no longer result in automatically upcast to `float64`, and instead try to find the smallest `dtype` that would suffice (:issue:`13247`)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 97df81ad6be48..be1ca0af802d1 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -19,6 +19,7 @@
array_equivalent, _is_na_compat,
_maybe_convert_string_to_object,
_maybe_convert_scalar,
+ is_float_dtype, is_numeric_dtype,
is_categorical, is_datetimelike_v_numeric,
is_numeric_v_string_like, is_extension_type)
import pandas.core.algorithms as algos
@@ -4443,6 +4444,8 @@ def _lcd_dtype(l):
return np.dtype('int%s' % (lcd.itemsize * 8 * 2))
return lcd
+ elif have_int and have_float and not have_complex:
+ return np.dtype('float64')
elif have_complex:
return np.dtype('c16')
else:
@@ -4785,6 +4788,8 @@ def get_empty_dtype_and_na(join_units):
upcast_cls = 'datetime'
elif is_timedelta64_dtype(dtype):
upcast_cls = 'timedelta'
+ elif is_float_dtype(dtype) or is_numeric_dtype(dtype):
+ upcast_cls = dtype.name
else:
upcast_cls = 'float'
@@ -4809,8 +4814,6 @@ def get_empty_dtype_and_na(join_units):
return np.dtype(np.bool_), None
elif 'category' in upcast_classes:
return np.dtype(np.object_), np.nan
- elif 'float' in upcast_classes:
- return np.dtype(np.float64), np.nan
elif 'datetimetz' in upcast_classes:
dtype = upcast_classes['datetimetz']
return dtype[0], tslib.iNaT
@@ -4819,7 +4822,17 @@ def get_empty_dtype_and_na(join_units):
elif 'timedelta' in upcast_classes:
return np.dtype('m8[ns]'), tslib.iNaT
else: # pragma
- raise AssertionError("invalid dtype determination in get_concat_dtype")
+ g = np.find_common_type(upcast_classes, [])
+ if is_float_dtype(g):
+ return g, g.type(np.nan)
+ elif is_numeric_dtype(g):
+ if has_none_blocks:
+ return np.float64, np.nan
+ else:
+ return g, None
+ else:
+ msg = "invalid dtype determination in get_concat_dtype"
+ raise AssertionError(msg)
def concatenate_join_units(join_units, concat_axis, copy):
@@ -5083,7 +5096,6 @@ def is_null(self):
return True
def get_reindexed_values(self, empty_dtype, upcasted_na):
-
if upcasted_na is None:
# No upcasting is necessary
fill_value = self.block.fill_value
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index b86b248ead290..36d4f18dd6a24 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -4035,11 +4035,11 @@ def f():
self.assertRaises(ValueError, f)
- # these are coerced to float unavoidably (as its a list-like to begin)
+ # these are coerced to object unavoidably (as its a list-like to begin)
df = DataFrame(columns=['A', 'B'])
df.loc[3] = [6, 7]
assert_frame_equal(df, DataFrame(
- [[6, 7]], index=[3], columns=['A', 'B'], dtype='float64'))
+ [[6, 7]], index=[3], columns=['A', 'B'], dtype='object'))
def test_partial_setting_with_datetimelike_dtype(self):
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 6a97f195abba7..44e0a42d1360a 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -655,7 +655,7 @@ def test_interleave(self):
mgr = create_mgr('a: f8; b: i8')
self.assertEqual(mgr.as_matrix().dtype, 'f8')
mgr = create_mgr('a: f4; b: i8')
- self.assertEqual(mgr.as_matrix().dtype, 'f4')
+ self.assertEqual(mgr.as_matrix().dtype, 'f8')
mgr = create_mgr('a: f4; b: i8; d: object')
self.assertEqual(mgr.as_matrix().dtype, 'object')
mgr = create_mgr('a: bool; b: i8')
diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py
index 9d9b0635e0f35..f43d98fda6398 100644
--- a/pandas/tools/tests/test_concat.py
+++ b/pandas/tools/tests/test_concat.py
@@ -1031,6 +1031,40 @@ def test_concat_invalid_first_argument(self):
expected = read_csv(StringIO(data))
assert_frame_equal(result, expected)
+ def test_concat_no_unnecessary_upcasts(self):
+ # fixes #13247
+
+ for pdt in [pd.Series, pd.DataFrame, pd.Panel, pd.Panel4D]:
+ dims = pdt().ndim
+ for dt in np.sctypes['float']:
+ dfs = [pdt(np.array([1], dtype=dt, ndmin=dims)),
+ pdt(np.array([np.nan], dtype=dt, ndmin=dims)),
+ pdt(np.array([5], dtype=dt, ndmin=dims))]
+ x = pd.concat(dfs)
+ self.assertTrue(x.values.dtype == dt)
+
+ for dt in (np.sctypes['int'] + np.sctypes['uint']):
+ dfs = [pdt(np.array([1], dtype=dt, ndmin=dims)),
+ pdt(np.array([5], dtype=dt, ndmin=dims))]
+ x = pd.concat(dfs)
+ self.assertTrue(x.values.dtype == dt)
+
+ objs = []
+ objs.append(pdt(np.array([1], dtype=np.float32, ndmin=dims)))
+ objs.append(pdt(np.array([1], dtype=np.float16, ndmin=dims)))
+ self.assertTrue(pd.concat(objs).values.dtype == np.float32)
+
+ objs = []
+ objs.append(pdt(np.array([1], dtype=np.int32, ndmin=dims)))
+ objs.append(pdt(np.array([1], dtype=np.int64, ndmin=dims)))
+ self.assertTrue(pd.concat(objs).values.dtype == np.int64)
+
+ # not sure what is the best answer here
+ objs = []
+ objs.append(pdt(np.array([1], dtype=np.int32, ndmin=dims)))
+ objs.append(pdt(np.array([1], dtype=np.float16, ndmin=dims)))
+ self.assertTrue(pd.concat(objs).values.dtype == np.float64)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| - [x] closes #13247
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13337 | 2016-05-31T21:42:16Z | 2017-03-07T21:13:35Z | null | 2017-03-14T12:27:47Z |
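The expected dtypes in `test_concat_no_unnecessary_upcasts` above follow NumPy's type-promotion rules. As a hedged illustration (this shows the promotion arithmetic, not the pandas `concat` internals), `np.promote_types` reproduces each expected outcome, including the mixed `int32`/`float16` case the test flags as "not sure what is the best answer here":

```python
import numpy as np

# Same-kind promotion keeps the wider type: float16 + float32 -> float32.
assert np.promote_types(np.float16, np.float32) == np.dtype(np.float32)

# int32 + int64 -> int64, as asserted in the test.
assert np.promote_types(np.int32, np.int64) == np.dtype(np.int64)

# Mixed int/float can widen further: float16 cannot represent every int32,
# so the common type jumps to float64 -- matching the last assertion above.
assert np.promote_types(np.int32, np.float16) == np.dtype(np.float64)
```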
ENH: Series has gained the properties .is_monotonic* | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 9e7ae2357c541..0e893308dd935 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -354,6 +354,9 @@ Computations / Descriptive Stats
Series.unique
Series.nunique
Series.is_unique
+ Series.is_monotonic
+ Series.is_monotonic_increasing
+ Series.is_monotonic_decreasing
Series.value_counts
Reindexing / Selection / Label manipulation
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 33a48671a9b65..3fc1a69cb600e 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -92,7 +92,7 @@ Other enhancements
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
-
+- ``Series`` has gained the properties ``.is_monotonic``, ``.is_monotonic_increasing``, ``.is_monotonic_decreasing``, similar to ``Index`` (:issue:`13336`)
.. _whatsnew_0182.api:
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 36f1f24fec6f7..bf21455bfac91 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -995,6 +995,32 @@ def is_unique(self):
"""
return self.nunique() == len(self)
+ @property
+ def is_monotonic(self):
+ """
+ Return boolean if values in the object are monotonic
+
+ Returns
+ -------
+ is_monotonic : boolean
+ """
+ from pandas import Index
+ return Index(self).is_monotonic
+ is_monotonic_increasing = is_monotonic
+
+ @property
+ def is_monotonic_decreasing(self):
+ """
+ Return boolean if values in the object are
+ monotonic_decreasing
+
+ Returns
+ -------
+ is_monotonic_decreasing : boolean
+ """
+ from pandas import Index
+ return Index(self).is_monotonic_decreasing
+
def memory_usage(self, deep=False):
"""
Memory usage of my values
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index c190b0d9e3bb0..433f0f4bc67f5 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1397,6 +1397,23 @@ def test_is_unique(self):
s = Series(np.arange(1000))
self.assertTrue(s.is_unique)
+ def test_is_monotonic(self):
+
+ s = Series(np.random.randint(0, 10, size=1000))
+ self.assertFalse(s.is_monotonic)
+ s = Series(np.arange(1000))
+ self.assertTrue(s.is_monotonic)
+ self.assertTrue(s.is_monotonic_increasing)
+ s = Series(np.arange(1000, 0, -1))
+ self.assertTrue(s.is_monotonic_decreasing)
+
+ s = Series(pd.date_range('20130101', periods=10))
+ self.assertTrue(s.is_monotonic)
+ self.assertTrue(s.is_monotonic_increasing)
+ s = Series(list(reversed(s.tolist())))
+ self.assertFalse(s.is_monotonic)
+ self.assertTrue(s.is_monotonic_decreasing)
+
def test_sort_values(self):
ts = self.ts.copy()
| https://api.github.com/repos/pandas-dev/pandas/pulls/13336 | 2016-05-31T20:49:08Z | 2016-05-31T21:49:29Z | null | 2016-05-31T21:49:29Z |
|
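The new `Series` properties delegate to `Index.is_monotonic` / `Index.is_monotonic_decreasing`, as the `pandas/core/base.py` hunk shows. A minimal pure-Python sketch of the non-strict check those properties perform (hypothetical helper names, not the pandas implementation):

```python
def is_monotonic_increasing(values):
    """True if each element is >= its predecessor (non-strict)."""
    return all(a <= b for a, b in zip(values, values[1:]))


def is_monotonic_decreasing(values):
    """True if each element is <= its predecessor (non-strict)."""
    return all(a >= b for a, b in zip(values, values[1:]))


# Mirrors the cases in test_is_monotonic above.
assert is_monotonic_increasing(list(range(1000)))
assert is_monotonic_decreasing(list(range(1000, 0, -1)))
assert not is_monotonic_increasing([3, 1, 2])
```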
TST: more strict testing in lint.sh | diff --git a/ci/lint.sh b/ci/lint.sh
index eb4c655e8bd3e..a4c960084040f 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -20,7 +20,7 @@ if [ "$LINT" ]; then
echo "Linting DONE"
echo "Check for invalid testing"
- grep -r --include '*.py' --exclude nosetester.py --exclude testing.py 'numpy.testing' pandas
+ grep -r -E --include '*.py' --exclude nosetester.py --exclude testing.py '(numpy|np)\.testing' pandas
if [ $? = "0" ]; then
RET=1
fi
diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index 43b8d6b9563f1..9f8aedc2e399e 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -100,7 +100,7 @@ def test_frame_non_unique_index(self):
orient='split'))
unser = read_json(df.to_json(orient='records'), orient='records')
self.assert_index_equal(df.columns, unser.columns)
- np.testing.assert_equal(df.values, unser.values)
+ tm.assert_almost_equal(df.values, unser.values)
unser = read_json(df.to_json(orient='values'), orient='values')
tm.assert_numpy_array_equal(df.values, unser.values)
diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py
index b647ec6b25717..ad7d6c3c9f94f 100644
--- a/pandas/io/tests/test_packers.py
+++ b/pandas/io/tests/test_packers.py
@@ -671,14 +671,14 @@ def _test_small_strings_no_warn(self, compress):
with tm.assert_produces_warning(None):
empty_unpacked = self.encode_decode(empty, compress=compress)
- np.testing.assert_array_equal(empty_unpacked, empty)
+ tm.assert_numpy_array_equal(empty_unpacked, empty)
self.assertTrue(empty_unpacked.flags.writeable)
char = np.array([ord(b'a')], dtype='uint8')
with tm.assert_produces_warning(None):
char_unpacked = self.encode_decode(char, compress=compress)
- np.testing.assert_array_equal(char_unpacked, char)
+ tm.assert_numpy_array_equal(char_unpacked, char)
self.assertTrue(char_unpacked.flags.writeable)
# if this test fails I am sorry because the interpreter is now in a
# bad state where b'a' points to 98 == ord(b'b').
@@ -688,7 +688,7 @@ def _test_small_strings_no_warn(self, compress):
# always be the same (unless we were able to mutate the shared
# character singleton in which case ord(b'a') == ord(b'b').
self.assertEqual(ord(b'a'), ord(u'a'))
- np.testing.assert_array_equal(
+ tm.assert_numpy_array_equal(
char_unpacked,
np.array([ord(b'b')], dtype='uint8'),
)
diff --git a/pandas/src/testing.pyx b/pandas/src/testing.pyx
index 9f102ded597fd..6780cf311c244 100644
--- a/pandas/src/testing.pyx
+++ b/pandas/src/testing.pyx
@@ -55,7 +55,9 @@ cpdef assert_dict_equal(a, b, bint compare_keys=True):
return True
-cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True,
+cpdef assert_almost_equal(a, b,
+ check_less_precise=False,
+ bint check_dtype=True,
obj=None, lobj=None, robj=None):
"""Check that left and right objects are almost equal.
@@ -63,9 +65,10 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True,
----------
a : object
b : object
- check_less_precise : bool, default False
+ check_less_precise : bool or int, default False
Specify comparison precision.
5 digits (False) or 3 digits (True) after decimal points are compared.
+ If an integer, then this will be the number of decimal points to compare
check_dtype: bool, default True
check dtype if both a and b are np.ndarray
obj : str, default None
@@ -91,6 +94,8 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True,
if robj is None:
robj = b
+ assert isinstance(check_less_precise, (int, bool))
+
if isinstance(a, dict) or isinstance(b, dict):
return assert_dict_equal(a, b)
@@ -145,7 +150,7 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True,
for i in xrange(len(a)):
try:
- assert_almost_equal(a[i], b[i], check_less_precise)
+ assert_almost_equal(a[i], b[i], check_less_precise=check_less_precise)
except AssertionError:
is_unequal = True
diff += 1
@@ -173,11 +178,12 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True,
# inf comparison
return True
- decimal = 5
-
- # deal with differing dtypes
- if check_less_precise:
+ if check_less_precise is True:
decimal = 3
+ elif check_less_precise is False:
+ decimal = 5
+ else:
+ decimal = check_less_precise
fa, fb = a, b
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index be8468d426946..8af93ad0ecb2e 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -585,7 +585,7 @@ def test_group_var_generic_1d(self):
expected_counts = counts + 3
self.algo(out, counts, values, labels)
- np.testing.assert_allclose(out, expected_out, self.rtol)
+ self.assertTrue(np.allclose(out, expected_out, self.rtol))
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_1d_flat_labels(self):
@@ -601,7 +601,7 @@ def test_group_var_generic_1d_flat_labels(self):
self.algo(out, counts, values, labels)
- np.testing.assert_allclose(out, expected_out, self.rtol)
+ self.assertTrue(np.allclose(out, expected_out, self.rtol))
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_2d_all_finite(self):
@@ -616,7 +616,7 @@ def test_group_var_generic_2d_all_finite(self):
expected_counts = counts + 2
self.algo(out, counts, values, labels)
- np.testing.assert_allclose(out, expected_out, self.rtol)
+ self.assertTrue(np.allclose(out, expected_out, self.rtol))
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_generic_2d_some_nan(self):
@@ -631,11 +631,11 @@ def test_group_var_generic_2d_some_nan(self):
expected_out = np.vstack([values[:, 0]
.reshape(5, 2, order='F')
.std(ddof=1, axis=1) ** 2,
- np.nan * np.ones(5)]).T
+ np.nan * np.ones(5)]).T.astype(self.dtype)
expected_counts = counts + 2
self.algo(out, counts, values, labels)
- np.testing.assert_allclose(out, expected_out, self.rtol)
+ tm.assert_almost_equal(out, expected_out, check_less_precise=6)
tm.assert_numpy_array_equal(counts, expected_counts)
def test_group_var_constant(self):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index e244a04127949..904bedde03312 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -799,30 +799,31 @@ def setUp(self):
def test_nanvar_all_finite(self):
samples = self.samples
actual_variance = nanops.nanvar(samples)
- np.testing.assert_almost_equal(actual_variance, self.variance,
- decimal=2)
+ tm.assert_almost_equal(actual_variance, self.variance,
+ check_less_precise=2)
def test_nanvar_nans(self):
samples = np.nan * np.ones(2 * self.samples.shape[0])
samples[::2] = self.samples
actual_variance = nanops.nanvar(samples, skipna=True)
- np.testing.assert_almost_equal(actual_variance, self.variance,
- decimal=2)
+ tm.assert_almost_equal(actual_variance, self.variance,
+ check_less_precise=2)
actual_variance = nanops.nanvar(samples, skipna=False)
- np.testing.assert_almost_equal(actual_variance, np.nan, decimal=2)
+ tm.assert_almost_equal(actual_variance, np.nan, check_less_precise=2)
def test_nanstd_nans(self):
samples = np.nan * np.ones(2 * self.samples.shape[0])
samples[::2] = self.samples
actual_std = nanops.nanstd(samples, skipna=True)
- np.testing.assert_almost_equal(actual_std, self.variance ** 0.5,
- decimal=2)
+ tm.assert_almost_equal(actual_std, self.variance ** 0.5,
+ check_less_precise=2)
actual_std = nanops.nanvar(samples, skipna=False)
- np.testing.assert_almost_equal(actual_std, np.nan, decimal=2)
+ tm.assert_almost_equal(actual_std, np.nan,
+ check_less_precise=2)
def test_nanvar_axis(self):
# Generate some sample data.
@@ -831,8 +832,8 @@ def test_nanvar_axis(self):
samples = np.vstack([samples_norm, samples_unif])
actual_variance = nanops.nanvar(samples, axis=1)
- np.testing.assert_array_almost_equal(actual_variance, np.array(
- [self.variance, 1.0 / 12]), decimal=2)
+ tm.assert_almost_equal(actual_variance, np.array(
+ [self.variance, 1.0 / 12]), check_less_precise=2)
def test_nanvar_ddof(self):
n = 5
@@ -845,13 +846,16 @@ def test_nanvar_ddof(self):
# The unbiased estimate.
var = 1.0 / 12
- np.testing.assert_almost_equal(variance_1, var, decimal=2)
+ tm.assert_almost_equal(variance_1, var,
+ check_less_precise=2)
+
# The underestimated variance.
- np.testing.assert_almost_equal(variance_0, (n - 1.0) / n * var,
- decimal=2)
+ tm.assert_almost_equal(variance_0, (n - 1.0) / n * var,
+ check_less_precise=2)
+
# The overestimated variance.
- np.testing.assert_almost_equal(variance_2, (n - 1.0) / (n - 2.0) * var,
- decimal=2)
+ tm.assert_almost_equal(variance_2, (n - 1.0) / (n - 2.0) * var,
+ check_less_precise=2)
def test_ground_truth(self):
# Test against values that were precomputed with Numpy.
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 7792a1f5d3509..b1f09ad2685e3 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -2301,8 +2301,8 @@ def test_update_raise(self):
[[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
- np.testing.assert_raises(Exception, pan.update, *(pan, ),
- **{'raise_conflict': True})
+ self.assertRaises(Exception, pan.update, *(pan, ),
+ **{'raise_conflict': True})
def test_all_any(self):
self.assertTrue((self.panel.all(axis=0).values == nanall(
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 9cc76591e9b7b..c4e864a909c03 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -519,12 +519,18 @@ def test_less_precise(self):
self.assertRaises(AssertionError, assert_series_equal, s1, s2)
self._assert_equal(s1, s2, check_less_precise=True)
+ for i in range(4):
+ self._assert_equal(s1, s2, check_less_precise=i)
+ self.assertRaises(AssertionError, assert_series_equal, s1, s2, 10)
s1 = Series([0.12345], dtype='float32')
s2 = Series([0.12346], dtype='float32')
self.assertRaises(AssertionError, assert_series_equal, s1, s2)
self._assert_equal(s1, s2, check_less_precise=True)
+ for i in range(4):
+ self._assert_equal(s1, s2, check_less_precise=i)
+ self.assertRaises(AssertionError, assert_series_equal, s1, s2, 10)
# even less than less precise
s1 = Series([0.1235], dtype='float32')
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index f2b5bf7d2739d..ef94692ea9673 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -116,18 +116,40 @@ def assertNotAlmostEquals(self, *args, **kwargs):
def assert_almost_equal(left, right, check_exact=False,
- check_dtype='equiv', **kwargs):
+ check_dtype='equiv', check_less_precise=False,
+ **kwargs):
+ """Check that left and right Index are equal.
+
+ Parameters
+ ----------
+ left : object
+ right : object
+ check_exact : bool, default True
+ Whether to compare number exactly.
+ check_dtype: bool, default True
+ check dtype if both a and b are the same type
+ check_less_precise : bool or int, default False
+ Specify comparison precision. Only used when check_exact is False.
+ 5 digits (False) or 3 digits (True) after decimal points are compared.
+ If int, then specify the digits to compare
+ """
if isinstance(left, pd.Index):
return assert_index_equal(left, right, check_exact=check_exact,
- exact=check_dtype, **kwargs)
+ exact=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
elif isinstance(left, pd.Series):
return assert_series_equal(left, right, check_exact=check_exact,
- check_dtype=check_dtype, **kwargs)
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
elif isinstance(left, pd.DataFrame):
return assert_frame_equal(left, right, check_exact=check_exact,
- check_dtype=check_dtype, **kwargs)
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
else:
# other sequences
@@ -142,8 +164,11 @@ def assert_almost_equal(left, right, check_exact=False,
else:
obj = 'Input'
assert_class_equal(left, right, obj=obj)
- return _testing.assert_almost_equal(left, right,
- check_dtype=check_dtype, **kwargs)
+ return _testing.assert_almost_equal(
+ left, right,
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise,
+ **kwargs)
def assert_dict_equal(left, right, compare_keys=True):
@@ -690,9 +715,10 @@ def assert_index_equal(left, right, exact='equiv', check_names=True,
Int64Index as well
check_names : bool, default True
Whether to check the names attribute.
- check_less_precise : bool, default False
+ check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
+ If int, then specify the digits to compare
check_exact : bool, default True
Whether to compare number exactly.
check_categorical : bool, default True
@@ -1040,9 +1066,10 @@ def assert_series_equal(left, right, check_dtype=True,
are identical.
check_series_type : bool, default False
Whether to check the Series class is identical.
- check_less_precise : bool, default False
+ check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
+ If int, then specify the digits to compare
check_exact : bool, default False
Whether to compare number exactly.
check_names : bool, default True
@@ -1106,7 +1133,7 @@ def assert_series_equal(left, right, check_dtype=True,
check_dtype=check_dtype)
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
- check_less_precise,
+ check_less_precise=check_less_precise,
check_dtype=check_dtype,
obj='{0}'.format(obj))
@@ -1150,9 +1177,10 @@ def assert_frame_equal(left, right, check_dtype=True,
are identical.
check_frame_type : bool, default False
Whether to check the DataFrame class is identical.
- check_less_precise : bool, default False
    + check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
+ If int, then specify the digits to compare
check_names : bool, default True
Whether to check the Index names attribute.
by_blocks : bool, default False
@@ -1259,9 +1287,10 @@ def assert_panelnd_equal(left, right,
Whether to check the Panel dtype is identical.
check_panel_type : bool, default False
Whether to check the Panel class is identical.
- check_less_precise : bool, default False
+ check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
+ If int, then specify the digits to compare
assert_func : function for comparing data
check_names : bool, default True
Whether to check the Index names attribute.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13334 | 2016-05-31T13:10:12Z | 2016-05-31T13:53:44Z | null | 2016-05-31T13:53:44Z |
|
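Besides tightening the lint grep, this PR widens `check_less_precise` to accept an explicit digit count. A standalone sketch of the resolution logic added to `pandas/src/testing.pyx` (simplified; the real code is Cython):

```python
def resolve_decimal(check_less_precise):
    """Map check_less_precise to the number of decimal digits compared:
    False -> 5 digits, True -> 3 digits, an int -> that many digits."""
    if check_less_precise is True:
        return 3
    if check_less_precise is False:
        return 5
    return check_less_precise  # an explicit digit count

assert resolve_decimal(False) == 5
assert resolve_decimal(True) == 3
assert resolve_decimal(2) == 2
```

Note the identity checks (`is True` / `is False`) rather than equality: `1 == True` in Python, so an equality test would silently turn `check_less_precise=1` into 3 digits.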
BUG: Fix maybe_convert_numeric for unhashable objects | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 2b67aca1dcf74..a552b67288b57 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -317,6 +317,7 @@ Bug Fixes
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
+- Bug in ``pd.to_numeric`` when ``errors='coerce'`` and input contains non-hashable objects (:issue:`13324`)
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index e2c59a34bdf21..d4e149eb09b65 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -569,7 +569,7 @@ def maybe_convert_numeric(object[:] values, set na_values,
for i in range(n):
val = values[i]
- if val in na_values:
+ if val.__hash__ is not None and val in na_values:
floats[i] = complexes[i] = nan
seen_float = True
elif util.is_float_object(val):
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
index 06e2a82e07dee..075e31034b261 100644
--- a/pandas/tests/test_infer_and_convert.py
+++ b/pandas/tests/test_infer_and_convert.py
@@ -102,6 +102,12 @@ def test_scientific_no_exponent(self):
result = lib.maybe_convert_numeric(arr, set(), False, True)
self.assertTrue(np.all(np.isnan(result)))
+ def test_convert_non_hashable(self):
+ # Test for Bug #13324
+ arr = np.array([[10.0, 2], 1.0, 'apple'])
+ result = lib.maybe_convert_numeric(arr, set(), False, True)
+ tm.assert_numpy_array_equal(result, np.array([np.nan, 1.0, np.nan]))
+
class TestTypeInference(tm.TestCase):
_multiprocess_can_split_ = True
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index 4e704554f982f..c592b33bdab9a 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -279,6 +279,18 @@ def test_period(self):
# res = pd.to_numeric(pd.Series(idx, name='xxx'))
# tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
+ def test_non_hashable(self):
+ # Test for Bug #13324
+ s = pd.Series([[10.0, 2], 1.0, 'apple'])
+ res = pd.to_numeric(s, errors='coerce')
+ tm.assert_series_equal(res, pd.Series([np.nan, 1.0, np.nan]))
+
+ res = pd.to_numeric(s, errors='ignore')
+ tm.assert_series_equal(res, pd.Series([[10.0, 2], 1.0, 'apple']))
+
+ with self.assertRaisesRegexp(TypeError, "Invalid object type"):
+ pd.to_numeric(s)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| - [x] closes #13324
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13326 | 2016-05-30T16:55:24Z | 2016-05-31T15:43:14Z | null | 2016-06-01T13:46:23Z |
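The one-line fix in `inference.pyx` guards the `na_values` membership test with a hashability check, since `val in na_values` raises `TypeError` for unhashable objects such as lists. A hypothetical, heavily simplified pure-Python sketch of the `errors='coerce'` loop (not the actual Cython code):

```python
import math

def coerce_to_numeric(values, na_values=frozenset()):
    """Sketch of errors='coerce': unhashable or unparseable entries
    become NaN instead of raising."""
    out = []
    for val in values:
        # The bug: `val in na_values` raises TypeError for unhashable
        # objects like [10.0, 2]; checking __hash__ first sidesteps it.
        if val.__hash__ is not None and val in na_values:
            out.append(float('nan'))
            continue
        try:
            out.append(float(val))
        except (TypeError, ValueError):
            out.append(float('nan'))
    return out

# Mirrors the array from test_convert_non_hashable above.
res = coerce_to_numeric([[10.0, 2], 1.0, 'apple'])
assert math.isnan(res[0]) and res[1] == 1.0 and math.isnan(res[2])
```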
TST: remove tests_tseries.py and distribute to other tests files | diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 4758c7f979da0..be8468d426946 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -3,15 +3,20 @@
import numpy as np
from numpy.random import RandomState
+from numpy import nan
+import datetime
-from pandas.core.api import Series, Categorical, CategoricalIndex
+from pandas import Series, Categorical, CategoricalIndex, Index
import pandas as pd
from pandas import compat
+import pandas.algos as _algos
+from pandas.compat import lrange
import pandas.core.algorithms as algos
import pandas.util.testing as tm
import pandas.hashtable as hashtable
from pandas.compat.numpy import np_array_datetime64_compat
+from pandas.util.testing import assert_almost_equal
class TestMatch(tm.TestCase):
@@ -705,6 +710,315 @@ def test_unique_label_indices():
tm.assert_numpy_array_equal(left, right)
+def test_rank():
+ tm._skip_if_no_scipy()
+ from scipy.stats import rankdata
+
+ def _check(arr):
+ mask = ~np.isfinite(arr)
+ arr = arr.copy()
+ result = _algos.rank_1d_float64(arr)
+ arr[mask] = np.inf
+ exp = rankdata(arr)
+ exp[mask] = nan
+ assert_almost_equal(result, exp)
+
+ _check(np.array([nan, nan, 5., 5., 5., nan, 1, 2, 3, nan]))
+ _check(np.array([4., nan, 5., 5., 5., nan, 1, 2, 4., nan]))
+
+
+def test_pad_backfill_object_segfault():
+
+ old = np.array([], dtype='O')
+ new = np.array([datetime.datetime(2010, 12, 31)], dtype='O')
+
+ result = _algos.pad_object(old, new)
+ expected = np.array([-1], dtype=np.int64)
+ assert (np.array_equal(result, expected))
+
+ result = _algos.pad_object(new, old)
+ expected = np.array([], dtype=np.int64)
+ assert (np.array_equal(result, expected))
+
+ result = _algos.backfill_object(old, new)
+ expected = np.array([-1], dtype=np.int64)
+ assert (np.array_equal(result, expected))
+
+ result = _algos.backfill_object(new, old)
+ expected = np.array([], dtype=np.int64)
+ assert (np.array_equal(result, expected))
+
+
+def test_arrmap():
+ values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O')
+ result = _algos.arrmap_object(values, lambda x: x in ['foo', 'bar'])
+ assert (result.dtype == np.bool_)
+
+
+class TestTseriesUtil(tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def test_combineFunc(self):
+ pass
+
+ def test_reindex(self):
+ pass
+
+ def test_isnull(self):
+ pass
+
+ def test_groupby(self):
+ pass
+
+ def test_groupby_withnull(self):
+ pass
+
+ def test_backfill(self):
+ old = Index([1, 5, 10])
+ new = Index(lrange(12))
+
+ filler = _algos.backfill_int64(old.values, new.values)
+
+ expect_filler = np.array([0, 0, 1, 1, 1, 1,
+ 2, 2, 2, 2, 2, -1], dtype=np.int64)
+ self.assert_numpy_array_equal(filler, expect_filler)
+
+ # corner case
+ old = Index([1, 4])
+ new = Index(lrange(5, 10))
+ filler = _algos.backfill_int64(old.values, new.values)
+
+ expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
+ self.assert_numpy_array_equal(filler, expect_filler)
+
+ def test_pad(self):
+ old = Index([1, 5, 10])
+ new = Index(lrange(12))
+
+ filler = _algos.pad_int64(old.values, new.values)
+
+ expect_filler = np.array([-1, 0, 0, 0, 0, 1,
+ 1, 1, 1, 1, 2, 2], dtype=np.int64)
+ self.assert_numpy_array_equal(filler, expect_filler)
+
+ # corner case
+ old = Index([5, 10])
+ new = Index(lrange(5))
+ filler = _algos.pad_int64(old.values, new.values)
+ expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
+ self.assert_numpy_array_equal(filler, expect_filler)
+
+
+def test_left_join_indexer_unique():
+ a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+ b = np.array([2, 2, 3, 4, 4], dtype=np.int64)
+
+ result = _algos.left_join_indexer_unique_int64(b, a)
+ expected = np.array([1, 1, 2, 3, 3], dtype=np.int64)
+ assert (np.array_equal(result, expected))
+
+
+def test_left_outer_join_bug():
+ left = np.array([0, 1, 0, 1, 1, 2, 3, 1, 0, 2, 1, 2, 0, 1, 1, 2, 3, 2, 3,
+ 2, 1, 1, 3, 0, 3, 2, 3, 0, 0, 2, 3, 2, 0, 3, 1, 3, 0, 1,
+ 3, 0, 0, 1, 0, 3, 1, 0, 1, 0, 1, 1, 0, 2, 2, 2, 2, 2, 0,
+ 3, 1, 2, 0, 0, 3, 1, 3, 2, 2, 0, 1, 3, 0, 2, 3, 2, 3, 3,
+ 2, 3, 3, 1, 3, 2, 0, 0, 3, 1, 1, 1, 0, 2, 3, 3, 1, 2, 0,
+ 3, 1, 2, 0, 2], dtype=np.int64)
+
+ right = np.array([3, 1], dtype=np.int64)
+ max_groups = 4
+
+ lidx, ridx = _algos.left_outer_join(left, right, max_groups, sort=False)
+
+ exp_lidx = np.arange(len(left))
+ exp_ridx = -np.ones(len(left))
+ exp_ridx[left == 1] = 1
+ exp_ridx[left == 3] = 0
+
+ assert (np.array_equal(lidx, exp_lidx))
+ assert (np.array_equal(ridx, exp_ridx))
+
+
+def test_inner_join_indexer():
+ a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+ b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+ index, ares, bres = _algos.inner_join_indexer_int64(a, b)
+
+ index_exp = np.array([3, 5], dtype=np.int64)
+ assert_almost_equal(index, index_exp)
+
+ aexp = np.array([2, 4], dtype=np.int64)
+ bexp = np.array([1, 2], dtype=np.int64)
+ assert_almost_equal(ares, aexp)
+ assert_almost_equal(bres, bexp)
+
+ a = np.array([5], dtype=np.int64)
+ b = np.array([5], dtype=np.int64)
+
+ index, ares, bres = _algos.inner_join_indexer_int64(a, b)
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_outer_join_indexer():
+ a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+ b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+ index, ares, bres = _algos.outer_join_indexer_int64(a, b)
+
+ index_exp = np.array([0, 1, 2, 3, 4, 5, 7, 9], dtype=np.int64)
+ assert_almost_equal(index, index_exp)
+
+ aexp = np.array([-1, 0, 1, 2, 3, 4, -1, -1], dtype=np.int64)
+ bexp = np.array([0, -1, -1, 1, -1, 2, 3, 4], dtype=np.int64)
+ assert_almost_equal(ares, aexp)
+ assert_almost_equal(bres, bexp)
+
+ a = np.array([5], dtype=np.int64)
+ b = np.array([5], dtype=np.int64)
+
+ index, ares, bres = _algos.outer_join_indexer_int64(a, b)
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_left_join_indexer():
+ a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+ b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+ index, ares, bres = _algos.left_join_indexer_int64(a, b)
+
+ assert_almost_equal(index, a)
+
+ aexp = np.array([0, 1, 2, 3, 4], dtype=np.int64)
+ bexp = np.array([-1, -1, 1, -1, 2], dtype=np.int64)
+ assert_almost_equal(ares, aexp)
+ assert_almost_equal(bres, bexp)
+
+ a = np.array([5], dtype=np.int64)
+ b = np.array([5], dtype=np.int64)
+
+ index, ares, bres = _algos.left_join_indexer_int64(a, b)
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_left_join_indexer2():
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
+
+ res, lidx, ridx = _algos.left_join_indexer_int64(idx2.values, idx.values)
+
+ exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
+ assert_almost_equal(res, exp_res)
+
+ exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
+ assert_almost_equal(lidx, exp_lidx)
+
+ exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
+ assert_almost_equal(ridx, exp_ridx)
+
+
+def test_outer_join_indexer2():
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
+
+ res, lidx, ridx = _algos.outer_join_indexer_int64(idx2.values, idx.values)
+
+ exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
+ assert_almost_equal(res, exp_res)
+
+ exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
+ assert_almost_equal(lidx, exp_lidx)
+
+ exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
+ assert_almost_equal(ridx, exp_ridx)
+
+
+def test_inner_join_indexer2():
+ idx = Index([1, 1, 2, 5])
+ idx2 = Index([1, 2, 5, 7, 9])
+
+ res, lidx, ridx = _algos.inner_join_indexer_int64(idx2.values, idx.values)
+
+ exp_res = np.array([1, 1, 2, 5], dtype=np.int64)
+ assert_almost_equal(res, exp_res)
+
+ exp_lidx = np.array([0, 0, 1, 2], dtype=np.int64)
+ assert_almost_equal(lidx, exp_lidx)
+
+ exp_ridx = np.array([0, 1, 2, 3], dtype=np.int64)
+ assert_almost_equal(ridx, exp_ridx)
+
+
+def test_is_lexsorted():
+ failure = [
+ np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
+ 3, 3,
+ 3, 3,
+ 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2,
+ 2, 2, 2, 2, 2, 2, 2,
+ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+ 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1,
+ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0]),
+ np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16,
+ 15, 14,
+ 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28,
+ 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
+ 12, 11,
+ 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25,
+ 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10,
+ 9, 8,
+ 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22,
+ 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7,
+ 6, 5,
+ 4, 3, 2, 1, 0])]
+
+ assert (not _algos.is_lexsorted(failure))
+
+# def test_get_group_index():
+# a = np.array([0, 1, 2, 0, 2, 1, 0, 0], dtype=np.int64)
+# b = np.array([1, 0, 3, 2, 0, 2, 3, 0], dtype=np.int64)
+# expected = np.array([1, 4, 11, 2, 8, 6, 3, 0], dtype=np.int64)
+
+# result = lib.get_group_index([a, b], (3, 4))
+
+# assert(np.array_equal(result, expected))
+
+
+def test_groupsort_indexer():
+ a = np.random.randint(0, 1000, 100).astype(np.int64)
+ b = np.random.randint(0, 1000, 100).astype(np.int64)
+
+ result = _algos.groupsort_indexer(a, 1000)[0]
+
+ # need to use a stable sort
+ expected = np.argsort(a, kind='mergesort')
+ assert (np.array_equal(result, expected))
+
+ # compare with lexsort
+ key = a * 1000 + b
+ result = _algos.groupsort_indexer(key, 1000000)[0]
+ expected = np.lexsort((b, a))
+ assert (np.array_equal(result, expected))
+
+
+def test_ensure_platform_int():
+ arr = np.arange(100)
+
+ result = _algos.ensure_platform_int(arr)
+ assert (result is arr)
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
new file mode 100644
index 0000000000000..06e2a82e07dee
--- /dev/null
+++ b/pandas/tests/test_infer_and_convert.py
@@ -0,0 +1,384 @@
+# -*- coding: utf-8 -*-
+
+from datetime import datetime, timedelta, date, time
+
+import numpy as np
+import pandas as pd
+import pandas.lib as lib
+import pandas.util.testing as tm
+from pandas import Index
+
+from pandas.compat import long, u, PY2
+
+
+class TestInference(tm.TestCase):
+
+ def test_infer_dtype_bytes(self):
+ compare = 'string' if PY2 else 'bytes'
+
+ # string array of bytes
+ arr = np.array(list('abc'), dtype='S1')
+ self.assertEqual(pd.lib.infer_dtype(arr), compare)
+
+ # object array of bytes
+ arr = arr.astype(object)
+ self.assertEqual(pd.lib.infer_dtype(arr), compare)
+
+ def test_isinf_scalar(self):
+ # GH 11352
+ self.assertTrue(lib.isposinf_scalar(float('inf')))
+ self.assertTrue(lib.isposinf_scalar(np.inf))
+ self.assertFalse(lib.isposinf_scalar(-np.inf))
+ self.assertFalse(lib.isposinf_scalar(1))
+ self.assertFalse(lib.isposinf_scalar('a'))
+
+ self.assertTrue(lib.isneginf_scalar(float('-inf')))
+ self.assertTrue(lib.isneginf_scalar(-np.inf))
+ self.assertFalse(lib.isneginf_scalar(np.inf))
+ self.assertFalse(lib.isneginf_scalar(1))
+ self.assertFalse(lib.isneginf_scalar('a'))
+
+ def test_maybe_convert_numeric_infinities(self):
+ # see gh-13274
+ infinities = ['inf', 'inF', 'iNf', 'Inf',
+ 'iNF', 'InF', 'INf', 'INF']
+ na_values = set(['', 'NULL', 'nan'])
+
+ pos = np.array(['inf'], dtype=np.float64)
+ neg = np.array(['-inf'], dtype=np.float64)
+
+ msg = "Unable to parse string"
+
+ for infinity in infinities:
+ for maybe_int in (True, False):
+ out = lib.maybe_convert_numeric(
+ np.array([infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ out = lib.maybe_convert_numeric(
+ np.array(['-' + infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, neg)
+
+ out = lib.maybe_convert_numeric(
+ np.array([u(infinity)], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ out = lib.maybe_convert_numeric(
+ np.array(['+' + infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ # too many characters
+ with tm.assertRaisesRegexp(ValueError, msg):
+ lib.maybe_convert_numeric(
+ np.array(['foo_' + infinity], dtype=object),
+ na_values, maybe_int)
+
+ def test_maybe_convert_numeric_post_floatify_nan(self):
+ # see gh-13314
+ data = np.array(['1.200', '-999.000', '4.500'], dtype=object)
+ expected = np.array([1.2, np.nan, 4.5], dtype=np.float64)
+ nan_values = set([-999, -999.0])
+
+ for coerce_type in (True, False):
+ out = lib.maybe_convert_numeric(data, nan_values, coerce_type)
+ tm.assert_numpy_array_equal(out, expected)
+
+ def test_convert_infs(self):
+ arr = np.array(['inf', 'inf', 'inf'], dtype='O')
+ result = lib.maybe_convert_numeric(arr, set(), False)
+ self.assertTrue(result.dtype == np.float64)
+
+ arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
+ result = lib.maybe_convert_numeric(arr, set(), False)
+ self.assertTrue(result.dtype == np.float64)
+
+ def test_scientific_no_exponent(self):
+ # See PR 12215
+ arr = np.array(['42E', '2E', '99e', '6e'], dtype='O')
+ result = lib.maybe_convert_numeric(arr, set(), False, True)
+ self.assertTrue(np.all(np.isnan(result)))
+
+
+class TestTypeInference(tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def test_length_zero(self):
+ result = lib.infer_dtype(np.array([], dtype='i4'))
+ self.assertEqual(result, 'integer')
+
+ result = lib.infer_dtype([])
+ self.assertEqual(result, 'empty')
+
+ def test_integers(self):
+ arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'integer')
+
+ arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'mixed-integer')
+
+ arr = np.array([1, 2, 3, 4, 5], dtype='i4')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'integer')
+
+ def test_bools(self):
+ arr = np.array([True, False, True, True, True], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'boolean')
+
+ arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'boolean')
+
+ arr = np.array([True, False, True, 'foo'], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'mixed')
+
+ arr = np.array([True, False, True], dtype=bool)
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'boolean')
+
+ def test_floats(self):
+ arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'floating')
+
+ arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
+ dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'mixed-integer')
+
+ arr = np.array([1, 2, 3, 4, 5], dtype='f4')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'floating')
+
+ arr = np.array([1, 2, 3, 4, 5], dtype='f8')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'floating')
+
+ def test_string(self):
+ pass
+
+ def test_unicode(self):
+ pass
+
+ def test_datetime(self):
+
+ dates = [datetime(2012, 1, x) for x in range(1, 20)]
+ index = Index(dates)
+ self.assertEqual(index.inferred_type, 'datetime64')
+
+ def test_date(self):
+
+ dates = [date(2012, 1, x) for x in range(1, 20)]
+ index = Index(dates)
+ self.assertEqual(index.inferred_type, 'date')
+
+ def test_to_object_array_tuples(self):
+ r = (5, 6)
+ values = [r]
+ result = lib.to_object_array_tuples(values)
+
+ try:
+ # make sure record array works
+ from collections import namedtuple
+ record = namedtuple('record', 'x y')
+ r = record(5, 6)
+ values = [r]
+ result = lib.to_object_array_tuples(values) # noqa
+ except ImportError:
+ pass
+
+ def test_object(self):
+
+ # GH 7431
+ # cannot infer more than this as only a single element
+ arr = np.array([None], dtype='O')
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'mixed')
+
+ def test_categorical(self):
+
+ # GH 8974
+ from pandas import Categorical, Series
+ arr = Categorical(list('abc'))
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'categorical')
+
+ result = lib.infer_dtype(Series(arr))
+ self.assertEqual(result, 'categorical')
+
+ arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
+ result = lib.infer_dtype(arr)
+ self.assertEqual(result, 'categorical')
+
+ result = lib.infer_dtype(Series(arr))
+ self.assertEqual(result, 'categorical')
+
+
+class TestConvert(tm.TestCase):
+
+ def test_convert_objects(self):
+ arr = np.array(['a', 'b', np.nan, np.nan, 'd', 'e', 'f'], dtype='O')
+ result = lib.maybe_convert_objects(arr)
+ self.assertTrue(result.dtype == np.object_)
+
+ def test_convert_objects_ints(self):
+ # test that we can detect many kinds of integers
+ dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
+
+ for dtype_str in dtypes:
+ arr = np.array(list(np.arange(20, dtype=dtype_str)), dtype='O')
+ self.assertTrue(arr[0].dtype == np.dtype(dtype_str))
+ result = lib.maybe_convert_objects(arr)
+ self.assertTrue(issubclass(result.dtype.type, np.integer))
+
+ def test_convert_objects_complex_number(self):
+ for dtype in np.sctypes['complex']:
+ arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O')
+ self.assertTrue(arr[0].dtype == np.dtype(dtype))
+ result = lib.maybe_convert_objects(arr)
+ self.assertTrue(issubclass(result.dtype.type, np.complexfloating))
+
+
+class Testisscalar(tm.TestCase):
+
+ def test_isscalar_builtin_scalars(self):
+ self.assertTrue(lib.isscalar(None))
+ self.assertTrue(lib.isscalar(True))
+ self.assertTrue(lib.isscalar(False))
+ self.assertTrue(lib.isscalar(0.))
+ self.assertTrue(lib.isscalar(np.nan))
+ self.assertTrue(lib.isscalar('foobar'))
+ self.assertTrue(lib.isscalar(b'foobar'))
+ self.assertTrue(lib.isscalar(u('efoobar')))
+ self.assertTrue(lib.isscalar(datetime(2014, 1, 1)))
+ self.assertTrue(lib.isscalar(date(2014, 1, 1)))
+ self.assertTrue(lib.isscalar(time(12, 0)))
+ self.assertTrue(lib.isscalar(timedelta(hours=1)))
+ self.assertTrue(lib.isscalar(pd.NaT))
+
+ def test_isscalar_builtin_nonscalars(self):
+ self.assertFalse(lib.isscalar({}))
+ self.assertFalse(lib.isscalar([]))
+ self.assertFalse(lib.isscalar([1]))
+ self.assertFalse(lib.isscalar(()))
+ self.assertFalse(lib.isscalar((1, )))
+ self.assertFalse(lib.isscalar(slice(None)))
+ self.assertFalse(lib.isscalar(Ellipsis))
+
+ def test_isscalar_numpy_array_scalars(self):
+ self.assertTrue(lib.isscalar(np.int64(1)))
+ self.assertTrue(lib.isscalar(np.float64(1.)))
+ self.assertTrue(lib.isscalar(np.int32(1)))
+ self.assertTrue(lib.isscalar(np.object_('foobar')))
+ self.assertTrue(lib.isscalar(np.str_('foobar')))
+ self.assertTrue(lib.isscalar(np.unicode_(u('foobar'))))
+ self.assertTrue(lib.isscalar(np.bytes_(b'foobar')))
+ self.assertTrue(lib.isscalar(np.datetime64('2014-01-01')))
+ self.assertTrue(lib.isscalar(np.timedelta64(1, 'h')))
+
+ def test_isscalar_numpy_zerodim_arrays(self):
+ for zerodim in [np.array(1), np.array('foobar'),
+ np.array(np.datetime64('2014-01-01')),
+ np.array(np.timedelta64(1, 'h')),
+ np.array(np.datetime64('NaT'))]:
+ self.assertFalse(lib.isscalar(zerodim))
+ self.assertTrue(lib.isscalar(lib.item_from_zerodim(zerodim)))
+
+ def test_isscalar_numpy_arrays(self):
+ self.assertFalse(lib.isscalar(np.array([])))
+ self.assertFalse(lib.isscalar(np.array([[]])))
+ self.assertFalse(lib.isscalar(np.matrix('1; 2')))
+
+ def test_isscalar_pandas_scalars(self):
+ self.assertTrue(lib.isscalar(pd.Timestamp('2014-01-01')))
+ self.assertTrue(lib.isscalar(pd.Timedelta(hours=1)))
+ self.assertTrue(lib.isscalar(pd.Period('2014-01-01')))
+
+    def test_isscalar_pandas_containers(self):
+ self.assertFalse(lib.isscalar(pd.Series()))
+ self.assertFalse(lib.isscalar(pd.Series([1])))
+ self.assertFalse(lib.isscalar(pd.DataFrame()))
+ self.assertFalse(lib.isscalar(pd.DataFrame([[1]])))
+ self.assertFalse(lib.isscalar(pd.Panel()))
+ self.assertFalse(lib.isscalar(pd.Panel([[[1]]])))
+ self.assertFalse(lib.isscalar(pd.Index([])))
+ self.assertFalse(lib.isscalar(pd.Index([1])))
+
+
+class TestParseSQL(tm.TestCase):
+
+ def test_convert_sql_column_floats(self):
+ arr = np.array([1.5, None, 3, 4.2], dtype=object)
+ result = lib.convert_sql_column(arr)
+ expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_strings(self):
+ arr = np.array(['1.5', None, '3', '4.2'], dtype=object)
+ result = lib.convert_sql_column(arr)
+ expected = np.array(['1.5', np.nan, '3', '4.2'], dtype=object)
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_unicode(self):
+ arr = np.array([u('1.5'), None, u('3'), u('4.2')],
+ dtype=object)
+ result = lib.convert_sql_column(arr)
+ expected = np.array([u('1.5'), np.nan, u('3'), u('4.2')],
+ dtype=object)
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_ints(self):
+ arr = np.array([1, 2, 3, 4], dtype='O')
+ arr2 = np.array([1, 2, 3, 4], dtype='i4').astype('O')
+ result = lib.convert_sql_column(arr)
+ result2 = lib.convert_sql_column(arr2)
+ expected = np.array([1, 2, 3, 4], dtype='i8')
+ self.assert_numpy_array_equal(result, expected)
+ self.assert_numpy_array_equal(result2, expected)
+
+ arr = np.array([1, 2, 3, None, 4], dtype='O')
+ result = lib.convert_sql_column(arr)
+ expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_longs(self):
+ arr = np.array([long(1), long(2), long(3), long(4)], dtype='O')
+ result = lib.convert_sql_column(arr)
+ expected = np.array([1, 2, 3, 4], dtype='i8')
+ self.assert_numpy_array_equal(result, expected)
+
+ arr = np.array([long(1), long(2), long(3), None, long(4)], dtype='O')
+ result = lib.convert_sql_column(arr)
+ expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_bools(self):
+ arr = np.array([True, False, True, False], dtype='O')
+ result = lib.convert_sql_column(arr)
+ expected = np.array([True, False, True, False], dtype=bool)
+ self.assert_numpy_array_equal(result, expected)
+
+ arr = np.array([True, False, None, False], dtype='O')
+ result = lib.convert_sql_column(arr)
+ expected = np.array([True, False, np.nan, False], dtype=object)
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_convert_sql_column_decimals(self):
+ from decimal import Decimal
+ arr = np.array([Decimal('1.5'), None, Decimal('3'), Decimal('4.2')])
+ result = lib.convert_sql_column(arr)
+ expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
+ self.assert_numpy_array_equal(result, expected)
+
+if __name__ == '__main__':
+ import nose
+
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index c6a703673a4c4..bfac0aa83b434 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -1,19 +1,9 @@
# -*- coding: utf-8 -*-
-from datetime import datetime, timedelta, date, time
-
import numpy as np
-import pandas as pd
import pandas.lib as lib
import pandas.util.testing as tm
-from pandas.compat import long, u, PY2
-
-
-def _assert_same_values_and_dtype(res, exp):
- tm.assert_equal(res.dtype, exp.dtype)
- tm.assert_almost_equal(res, exp)
-
class TestMisc(tm.TestCase):
@@ -34,16 +24,8 @@ def test_max_len_string_array(self):
tm.assertRaises(TypeError,
lambda: lib.max_len_string_array(arr.astype('U')))
- def test_infer_dtype_bytes(self):
- compare = 'string' if PY2 else 'bytes'
-
- # string array of bytes
- arr = np.array(list('abc'), dtype='S1')
- self.assertEqual(pd.lib.infer_dtype(arr), compare)
- # object array of bytes
- arr = arr.astype(object)
- self.assertEqual(pd.lib.infer_dtype(arr), compare)
+class TestIndexing(tm.TestCase):
def test_maybe_indices_to_slice_left_edge(self):
target = np.arange(100)
@@ -174,203 +156,58 @@ def test_maybe_indices_to_slice_middle(self):
self.assert_numpy_array_equal(maybe_slice, indices)
self.assert_numpy_array_equal(target[indices], target[maybe_slice])
- def test_isinf_scalar(self):
- # GH 11352
- self.assertTrue(lib.isposinf_scalar(float('inf')))
- self.assertTrue(lib.isposinf_scalar(np.inf))
- self.assertFalse(lib.isposinf_scalar(-np.inf))
- self.assertFalse(lib.isposinf_scalar(1))
- self.assertFalse(lib.isposinf_scalar('a'))
-
- self.assertTrue(lib.isneginf_scalar(float('-inf')))
- self.assertTrue(lib.isneginf_scalar(-np.inf))
- self.assertFalse(lib.isneginf_scalar(np.inf))
- self.assertFalse(lib.isneginf_scalar(1))
- self.assertFalse(lib.isneginf_scalar('a'))
-
-
-# tests related to functions imported from inference.pyx
-class TestInference(tm.TestCase):
- def test_maybe_convert_numeric_infinities(self):
- # see gh-13274
- infinities = ['inf', 'inF', 'iNf', 'Inf',
- 'iNF', 'InF', 'INf', 'INF']
- na_values = set(['', 'NULL', 'nan'])
-
- pos = np.array(['inf'], dtype=np.float64)
- neg = np.array(['-inf'], dtype=np.float64)
-
- msg = "Unable to parse string"
-
- for infinity in infinities:
- for maybe_int in (True, False):
- out = lib.maybe_convert_numeric(
- np.array([infinity], dtype=object),
- na_values, maybe_int)
- tm.assert_numpy_array_equal(out, pos)
-
- out = lib.maybe_convert_numeric(
- np.array(['-' + infinity], dtype=object),
- na_values, maybe_int)
- tm.assert_numpy_array_equal(out, neg)
-
- out = lib.maybe_convert_numeric(
- np.array([u(infinity)], dtype=object),
- na_values, maybe_int)
- tm.assert_numpy_array_equal(out, pos)
-
- out = lib.maybe_convert_numeric(
- np.array(['+' + infinity], dtype=object),
- na_values, maybe_int)
- tm.assert_numpy_array_equal(out, pos)
-
- # too many characters
- with tm.assertRaisesRegexp(ValueError, msg):
- lib.maybe_convert_numeric(
- np.array(['foo_' + infinity], dtype=object),
- na_values, maybe_int)
-
- def test_maybe_convert_numeric_post_floatify_nan(self):
- # see gh-13314
- data = np.array(['1.200', '-999.000', '4.500'], dtype=object)
- expected = np.array([1.2, np.nan, 4.5], dtype=np.float64)
- nan_values = set([-999, -999.0])
-
- for coerce_type in (True, False):
- out = lib.maybe_convert_numeric(data, nan_values, coerce_type)
- tm.assert_numpy_array_equal(out, expected)
-
-
-class Testisscalar(tm.TestCase):
-
- def test_isscalar_builtin_scalars(self):
- self.assertTrue(lib.isscalar(None))
- self.assertTrue(lib.isscalar(True))
- self.assertTrue(lib.isscalar(False))
- self.assertTrue(lib.isscalar(0.))
- self.assertTrue(lib.isscalar(np.nan))
- self.assertTrue(lib.isscalar('foobar'))
- self.assertTrue(lib.isscalar(b'foobar'))
- self.assertTrue(lib.isscalar(u('efoobar')))
- self.assertTrue(lib.isscalar(datetime(2014, 1, 1)))
- self.assertTrue(lib.isscalar(date(2014, 1, 1)))
- self.assertTrue(lib.isscalar(time(12, 0)))
- self.assertTrue(lib.isscalar(timedelta(hours=1)))
- self.assertTrue(lib.isscalar(pd.NaT))
-
- def test_isscalar_builtin_nonscalars(self):
- self.assertFalse(lib.isscalar({}))
- self.assertFalse(lib.isscalar([]))
- self.assertFalse(lib.isscalar([1]))
- self.assertFalse(lib.isscalar(()))
- self.assertFalse(lib.isscalar((1, )))
- self.assertFalse(lib.isscalar(slice(None)))
- self.assertFalse(lib.isscalar(Ellipsis))
-
- def test_isscalar_numpy_array_scalars(self):
- self.assertTrue(lib.isscalar(np.int64(1)))
- self.assertTrue(lib.isscalar(np.float64(1.)))
- self.assertTrue(lib.isscalar(np.int32(1)))
- self.assertTrue(lib.isscalar(np.object_('foobar')))
- self.assertTrue(lib.isscalar(np.str_('foobar')))
- self.assertTrue(lib.isscalar(np.unicode_(u('foobar'))))
- self.assertTrue(lib.isscalar(np.bytes_(b'foobar')))
- self.assertTrue(lib.isscalar(np.datetime64('2014-01-01')))
- self.assertTrue(lib.isscalar(np.timedelta64(1, 'h')))
-
- def test_isscalar_numpy_zerodim_arrays(self):
- for zerodim in [np.array(1), np.array('foobar'),
- np.array(np.datetime64('2014-01-01')),
- np.array(np.timedelta64(1, 'h')),
- np.array(np.datetime64('NaT'))]:
- self.assertFalse(lib.isscalar(zerodim))
- self.assertTrue(lib.isscalar(lib.item_from_zerodim(zerodim)))
-
- def test_isscalar_numpy_arrays(self):
- self.assertFalse(lib.isscalar(np.array([])))
- self.assertFalse(lib.isscalar(np.array([[]])))
- self.assertFalse(lib.isscalar(np.matrix('1; 2')))
-
- def test_isscalar_pandas_scalars(self):
- self.assertTrue(lib.isscalar(pd.Timestamp('2014-01-01')))
- self.assertTrue(lib.isscalar(pd.Timedelta(hours=1)))
- self.assertTrue(lib.isscalar(pd.Period('2014-01-01')))
-
- def test_lisscalar_pandas_containers(self):
- self.assertFalse(lib.isscalar(pd.Series()))
- self.assertFalse(lib.isscalar(pd.Series([1])))
- self.assertFalse(lib.isscalar(pd.DataFrame()))
- self.assertFalse(lib.isscalar(pd.DataFrame([[1]])))
- self.assertFalse(lib.isscalar(pd.Panel()))
- self.assertFalse(lib.isscalar(pd.Panel([[[1]]])))
- self.assertFalse(lib.isscalar(pd.Index([])))
- self.assertFalse(lib.isscalar(pd.Index([1])))
-
-
-class TestParseSQL(tm.TestCase):
-
- def test_convert_sql_column_floats(self):
- arr = np.array([1.5, None, 3, 4.2], dtype=object)
- result = lib.convert_sql_column(arr)
- expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_strings(self):
- arr = np.array(['1.5', None, '3', '4.2'], dtype=object)
- result = lib.convert_sql_column(arr)
- expected = np.array(['1.5', np.nan, '3', '4.2'], dtype=object)
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_unicode(self):
- arr = np.array([u('1.5'), None, u('3'), u('4.2')],
- dtype=object)
- result = lib.convert_sql_column(arr)
- expected = np.array([u('1.5'), np.nan, u('3'), u('4.2')],
- dtype=object)
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_ints(self):
- arr = np.array([1, 2, 3, 4], dtype='O')
- arr2 = np.array([1, 2, 3, 4], dtype='i4').astype('O')
- result = lib.convert_sql_column(arr)
- result2 = lib.convert_sql_column(arr2)
- expected = np.array([1, 2, 3, 4], dtype='i8')
- _assert_same_values_and_dtype(result, expected)
- _assert_same_values_and_dtype(result2, expected)
-
- arr = np.array([1, 2, 3, None, 4], dtype='O')
- result = lib.convert_sql_column(arr)
- expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_longs(self):
- arr = np.array([long(1), long(2), long(3), long(4)], dtype='O')
- result = lib.convert_sql_column(arr)
- expected = np.array([1, 2, 3, 4], dtype='i8')
- _assert_same_values_and_dtype(result, expected)
-
- arr = np.array([long(1), long(2), long(3), None, long(4)], dtype='O')
- result = lib.convert_sql_column(arr)
- expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_bools(self):
- arr = np.array([True, False, True, False], dtype='O')
- result = lib.convert_sql_column(arr)
- expected = np.array([True, False, True, False], dtype=bool)
- _assert_same_values_and_dtype(result, expected)
-
- arr = np.array([True, False, None, False], dtype='O')
- result = lib.convert_sql_column(arr)
- expected = np.array([True, False, np.nan, False], dtype=object)
- _assert_same_values_and_dtype(result, expected)
-
- def test_convert_sql_column_decimals(self):
- from decimal import Decimal
- arr = np.array([Decimal('1.5'), None, Decimal('3'), Decimal('4.2')])
- result = lib.convert_sql_column(arr)
- expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
- _assert_same_values_and_dtype(result, expected)
+ def test_maybe_booleans_to_slice(self):
+ arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8)
+ result = lib.maybe_booleans_to_slice(arr)
+ self.assertTrue(result.dtype == np.bool_)
+
+ result = lib.maybe_booleans_to_slice(arr[:0])
+ self.assertTrue(result == slice(0, 0))
+
+ def test_get_reverse_indexer(self):
+ indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64)
+ result = lib.get_reverse_indexer(indexer, 5)
+ expected = np.array([4, 2, 3, 6, 7], dtype=np.int64)
+ self.assertTrue(np.array_equal(result, expected))
+
+
+def test_duplicated_with_nas():
+ keys = np.array([0, 1, np.nan, 0, 2, np.nan], dtype=object)
+
+ result = lib.duplicated(keys)
+ expected = [False, False, False, True, False, True]
+ assert (np.array_equal(result, expected))
+
+ result = lib.duplicated(keys, keep='first')
+ expected = [False, False, False, True, False, True]
+ assert (np.array_equal(result, expected))
+
+ result = lib.duplicated(keys, keep='last')
+ expected = [True, False, True, False, False, False]
+ assert (np.array_equal(result, expected))
+
+ result = lib.duplicated(keys, keep=False)
+ expected = [True, False, True, True, False, True]
+ assert (np.array_equal(result, expected))
+
+ keys = np.empty(8, dtype=object)
+ for i, t in enumerate(zip([0, 0, np.nan, np.nan] * 2,
+ [0, np.nan, 0, np.nan] * 2)):
+ keys[i] = t
+
+ result = lib.duplicated(keys)
+ falses = [False] * 4
+ trues = [True] * 4
+ expected = falses + trues
+ assert (np.array_equal(result, expected))
+
+ result = lib.duplicated(keys, keep='last')
+ expected = trues + falses
+ assert (np.array_equal(result, expected))
+
+ result = lib.duplicated(keys, keep=False)
+ expected = trues + trues
+ assert (np.array_equal(result, expected))
if __name__ == '__main__':
import nose
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
deleted file mode 100644
index 4dd1cf54a5527..0000000000000
--- a/pandas/tests/test_tseries.py
+++ /dev/null
@@ -1,714 +0,0 @@
-# -*- coding: utf-8 -*-
-from numpy import nan
-import numpy as np
-from pandas import Index, isnull, Timestamp
-from pandas.util.testing import assert_almost_equal
-import pandas.util.testing as tm
-from pandas.compat import range, lrange, zip
-import pandas.lib as lib
-import pandas._period as period
-import pandas.algos as algos
-from pandas.core import common as com
-import datetime
-
-
-class TestTseriesUtil(tm.TestCase):
- _multiprocess_can_split_ = True
-
- def test_combineFunc(self):
- pass
-
- def test_reindex(self):
- pass
-
- def test_isnull(self):
- pass
-
- def test_groupby(self):
- pass
-
- def test_groupby_withnull(self):
- pass
-
- def test_backfill(self):
- old = Index([1, 5, 10])
- new = Index(lrange(12))
-
- filler = algos.backfill_int64(old.values, new.values)
-
- expect_filler = np.array([0, 0, 1, 1, 1, 1,
- 2, 2, 2, 2, 2, -1], dtype=np.int64)
- self.assert_numpy_array_equal(filler, expect_filler)
-
- # corner case
- old = Index([1, 4])
- new = Index(lrange(5, 10))
- filler = algos.backfill_int64(old.values, new.values)
-
- expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
- self.assert_numpy_array_equal(filler, expect_filler)
-
- def test_pad(self):
- old = Index([1, 5, 10])
- new = Index(lrange(12))
-
- filler = algos.pad_int64(old.values, new.values)
-
- expect_filler = np.array([-1, 0, 0, 0, 0, 1,
- 1, 1, 1, 1, 2, 2], dtype=np.int64)
- self.assert_numpy_array_equal(filler, expect_filler)
-
- # corner case
- old = Index([5, 10])
- new = Index(lrange(5))
- filler = algos.pad_int64(old.values, new.values)
- expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
- self.assert_numpy_array_equal(filler, expect_filler)
-
-
-def test_left_join_indexer_unique():
- a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
- b = np.array([2, 2, 3, 4, 4], dtype=np.int64)
-
- result = algos.left_join_indexer_unique_int64(b, a)
- expected = np.array([1, 1, 2, 3, 3], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
-
-def test_left_outer_join_bug():
- left = np.array([0, 1, 0, 1, 1, 2, 3, 1, 0, 2, 1, 2, 0, 1, 1, 2, 3, 2, 3,
- 2, 1, 1, 3, 0, 3, 2, 3, 0, 0, 2, 3, 2, 0, 3, 1, 3, 0, 1,
- 3, 0, 0, 1, 0, 3, 1, 0, 1, 0, 1, 1, 0, 2, 2, 2, 2, 2, 0,
- 3, 1, 2, 0, 0, 3, 1, 3, 2, 2, 0, 1, 3, 0, 2, 3, 2, 3, 3,
- 2, 3, 3, 1, 3, 2, 0, 0, 3, 1, 1, 1, 0, 2, 3, 3, 1, 2, 0,
- 3, 1, 2, 0, 2], dtype=np.int64)
-
- right = np.array([3, 1], dtype=np.int64)
- max_groups = 4
-
- lidx, ridx = algos.left_outer_join(left, right, max_groups, sort=False)
-
- exp_lidx = np.arange(len(left))
- exp_ridx = -np.ones(len(left))
- exp_ridx[left == 1] = 1
- exp_ridx[left == 3] = 0
-
- assert (np.array_equal(lidx, exp_lidx))
- assert (np.array_equal(ridx, exp_ridx))
-
-
-def test_inner_join_indexer():
- a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
- b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
-
- index, ares, bres = algos.inner_join_indexer_int64(a, b)
-
- index_exp = np.array([3, 5], dtype=np.int64)
- assert_almost_equal(index, index_exp)
-
- aexp = np.array([2, 4], dtype=np.int64)
- bexp = np.array([1, 2], dtype=np.int64)
- assert_almost_equal(ares, aexp)
- assert_almost_equal(bres, bexp)
-
- a = np.array([5], dtype=np.int64)
- b = np.array([5], dtype=np.int64)
-
- index, ares, bres = algos.inner_join_indexer_int64(a, b)
- tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
- tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
- tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
-
-
-def test_outer_join_indexer():
- a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
- b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
-
- index, ares, bres = algos.outer_join_indexer_int64(a, b)
-
- index_exp = np.array([0, 1, 2, 3, 4, 5, 7, 9], dtype=np.int64)
- assert_almost_equal(index, index_exp)
-
- aexp = np.array([-1, 0, 1, 2, 3, 4, -1, -1], dtype=np.int64)
- bexp = np.array([0, -1, -1, 1, -1, 2, 3, 4], dtype=np.int64)
- assert_almost_equal(ares, aexp)
- assert_almost_equal(bres, bexp)
-
- a = np.array([5], dtype=np.int64)
- b = np.array([5], dtype=np.int64)
-
- index, ares, bres = algos.outer_join_indexer_int64(a, b)
- tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
- tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
- tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
-
-
-def test_left_join_indexer():
- a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
- b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
-
- index, ares, bres = algos.left_join_indexer_int64(a, b)
-
- assert_almost_equal(index, a)
-
- aexp = np.array([0, 1, 2, 3, 4], dtype=np.int64)
- bexp = np.array([-1, -1, 1, -1, 2], dtype=np.int64)
- assert_almost_equal(ares, aexp)
- assert_almost_equal(bres, bexp)
-
- a = np.array([5], dtype=np.int64)
- b = np.array([5], dtype=np.int64)
-
- index, ares, bres = algos.left_join_indexer_int64(a, b)
- tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
- tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
- tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
-
-
-def test_left_join_indexer2():
- idx = Index([1, 1, 2, 5])
- idx2 = Index([1, 2, 5, 7, 9])
-
- res, lidx, ridx = algos.left_join_indexer_int64(idx2.values, idx.values)
-
- exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
- assert_almost_equal(res, exp_res)
-
- exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
- assert_almost_equal(lidx, exp_lidx)
-
- exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
- assert_almost_equal(ridx, exp_ridx)
-
-
-def test_outer_join_indexer2():
- idx = Index([1, 1, 2, 5])
- idx2 = Index([1, 2, 5, 7, 9])
-
- res, lidx, ridx = algos.outer_join_indexer_int64(idx2.values, idx.values)
-
- exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
- assert_almost_equal(res, exp_res)
-
- exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
- assert_almost_equal(lidx, exp_lidx)
-
- exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
- assert_almost_equal(ridx, exp_ridx)
-
-
-def test_inner_join_indexer2():
- idx = Index([1, 1, 2, 5])
- idx2 = Index([1, 2, 5, 7, 9])
-
- res, lidx, ridx = algos.inner_join_indexer_int64(idx2.values, idx.values)
-
- exp_res = np.array([1, 1, 2, 5], dtype=np.int64)
- assert_almost_equal(res, exp_res)
-
- exp_lidx = np.array([0, 0, 1, 2], dtype=np.int64)
- assert_almost_equal(lidx, exp_lidx)
-
- exp_ridx = np.array([0, 1, 2, 3], dtype=np.int64)
- assert_almost_equal(ridx, exp_ridx)
-
-
-def test_is_lexsorted():
- failure = [
- np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3,
- 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2,
- 2, 2, 2, 2, 2, 2, 2,
- 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
- 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 1, 1, 1,
- 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0]),
- np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16,
- 15, 14,
- 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28,
- 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
- 12, 11,
- 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25,
- 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10,
- 9, 8,
- 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22,
- 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7,
- 6, 5,
- 4, 3, 2, 1, 0])]
-
- assert (not algos.is_lexsorted(failure))
-
-# def test_get_group_index():
-# a = np.array([0, 1, 2, 0, 2, 1, 0, 0], dtype=np.int64)
-# b = np.array([1, 0, 3, 2, 0, 2, 3, 0], dtype=np.int64)
-# expected = np.array([1, 4, 11, 2, 8, 6, 3, 0], dtype=np.int64)
-
-# result = lib.get_group_index([a, b], (3, 4))
-
-# assert(np.array_equal(result, expected))
-
-
-def test_groupsort_indexer():
- a = np.random.randint(0, 1000, 100).astype(np.int64)
- b = np.random.randint(0, 1000, 100).astype(np.int64)
-
- result = algos.groupsort_indexer(a, 1000)[0]
-
- # need to use a stable sort
- expected = np.argsort(a, kind='mergesort')
- assert (np.array_equal(result, expected))
-
- # compare with lexsort
- key = a * 1000 + b
- result = algos.groupsort_indexer(key, 1000000)[0]
- expected = np.lexsort((b, a))
- assert (np.array_equal(result, expected))
-
-
-def test_ensure_platform_int():
- arr = np.arange(100)
-
- result = algos.ensure_platform_int(arr)
- assert (result is arr)
-
-
-def test_duplicated_with_nas():
- keys = np.array([0, 1, nan, 0, 2, nan], dtype=object)
-
- result = lib.duplicated(keys)
- expected = [False, False, False, True, False, True]
- assert (np.array_equal(result, expected))
-
- result = lib.duplicated(keys, keep='first')
- expected = [False, False, False, True, False, True]
- assert (np.array_equal(result, expected))
-
- result = lib.duplicated(keys, keep='last')
- expected = [True, False, True, False, False, False]
- assert (np.array_equal(result, expected))
-
- result = lib.duplicated(keys, keep=False)
- expected = [True, False, True, True, False, True]
- assert (np.array_equal(result, expected))
-
- keys = np.empty(8, dtype=object)
- for i, t in enumerate(zip([0, 0, nan, nan] * 2, [0, nan, 0, nan] * 2)):
- keys[i] = t
-
- result = lib.duplicated(keys)
- falses = [False] * 4
- trues = [True] * 4
- expected = falses + trues
- assert (np.array_equal(result, expected))
-
- result = lib.duplicated(keys, keep='last')
- expected = trues + falses
- assert (np.array_equal(result, expected))
-
- result = lib.duplicated(keys, keep=False)
- expected = trues + trues
- assert (np.array_equal(result, expected))
-
-
-def test_maybe_booleans_to_slice():
- arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8)
- result = lib.maybe_booleans_to_slice(arr)
- assert (result.dtype == np.bool_)
-
- result = lib.maybe_booleans_to_slice(arr[:0])
- assert (result == slice(0, 0))
-
-
-def test_convert_objects():
- arr = np.array(['a', 'b', nan, nan, 'd', 'e', 'f'], dtype='O')
- result = lib.maybe_convert_objects(arr)
- assert (result.dtype == np.object_)
-
-
-def test_convert_infs():
- arr = np.array(['inf', 'inf', 'inf'], dtype='O')
- result = lib.maybe_convert_numeric(arr, set(), False)
- assert (result.dtype == np.float64)
-
- arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
- result = lib.maybe_convert_numeric(arr, set(), False)
- assert (result.dtype == np.float64)
-
-
-def test_scientific_no_exponent():
- # See PR 12215
- arr = np.array(['42E', '2E', '99e', '6e'], dtype='O')
- result = lib.maybe_convert_numeric(arr, set(), False, True)
- assert np.all(np.isnan(result))
-
-
-def test_convert_objects_ints():
- # test that we can detect many kinds of integers
- dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
-
- for dtype_str in dtypes:
- arr = np.array(list(np.arange(20, dtype=dtype_str)), dtype='O')
- assert (arr[0].dtype == np.dtype(dtype_str))
- result = lib.maybe_convert_objects(arr)
- assert (issubclass(result.dtype.type, np.integer))
-
-
-def test_convert_objects_complex_number():
- for dtype in np.sctypes['complex']:
- arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O')
- assert (arr[0].dtype == np.dtype(dtype))
- result = lib.maybe_convert_objects(arr)
- assert (issubclass(result.dtype.type, np.complexfloating))
-
-
-def test_rank():
- tm._skip_if_no_scipy()
- from scipy.stats import rankdata
-
- def _check(arr):
- mask = ~np.isfinite(arr)
- arr = arr.copy()
- result = algos.rank_1d_float64(arr)
- arr[mask] = np.inf
- exp = rankdata(arr)
- exp[mask] = nan
- assert_almost_equal(result, exp)
-
- _check(np.array([nan, nan, 5., 5., 5., nan, 1, 2, 3, nan]))
- _check(np.array([4., nan, 5., 5., 5., nan, 1, 2, 4., nan]))
-
-
-def test_get_reverse_indexer():
- indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64)
- result = lib.get_reverse_indexer(indexer, 5)
- expected = np.array([4, 2, 3, 6, 7], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
-
-def test_pad_backfill_object_segfault():
-
- old = np.array([], dtype='O')
- new = np.array([datetime.datetime(2010, 12, 31)], dtype='O')
-
- result = algos.pad_object(old, new)
- expected = np.array([-1], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
- result = algos.pad_object(new, old)
- expected = np.array([], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
- result = algos.backfill_object(old, new)
- expected = np.array([-1], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
- result = algos.backfill_object(new, old)
- expected = np.array([], dtype=np.int64)
- assert (np.array_equal(result, expected))
-
-
-def test_arrmap():
- values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O')
- result = algos.arrmap_object(values, lambda x: x in ['foo', 'bar'])
- assert (result.dtype == np.bool_)
-
-
-def test_series_grouper():
- from pandas import Series
- obj = Series(np.random.randn(10))
- dummy = obj[:0]
-
- labels = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 1], dtype=np.int64)
-
- grouper = lib.SeriesGrouper(obj, np.mean, labels, 2, dummy)
- result, counts = grouper.get_result()
-
- expected = np.array([obj[3:6].mean(), obj[6:].mean()])
- assert_almost_equal(result, expected)
-
- exp_counts = np.array([3, 4], dtype=np.int64)
- assert_almost_equal(counts, exp_counts)
-
-
-def test_series_bin_grouper():
- from pandas import Series
- obj = Series(np.random.randn(10))
- dummy = obj[:0]
-
- bins = np.array([3, 6])
-
- grouper = lib.SeriesBinGrouper(obj, np.mean, bins, dummy)
- result, counts = grouper.get_result()
-
- expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
- assert_almost_equal(result, expected)
-
- exp_counts = np.array([3, 3, 4], dtype=np.int64)
- assert_almost_equal(counts, exp_counts)
-
-
-class TestBinGroupers(tm.TestCase):
- _multiprocess_can_split_ = True
-
- def setUp(self):
- self.obj = np.random.randn(10, 1)
- self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
- self.bins = np.array([3, 6], dtype=np.int64)
-
- def test_generate_bins(self):
- from pandas.core.groupby import generate_bins_generic
- values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
- binner = np.array([0, 3, 6, 9], dtype=np.int64)
-
- for func in [lib.generate_bins_dt64, generate_bins_generic]:
- bins = func(values, binner, closed='left')
- assert ((bins == np.array([2, 5, 6])).all())
-
- bins = func(values, binner, closed='right')
- assert ((bins == np.array([3, 6, 6])).all())
-
- for func in [lib.generate_bins_dt64, generate_bins_generic]:
- values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
- binner = np.array([0, 3, 6], dtype=np.int64)
-
- bins = func(values, binner, closed='right')
- assert ((bins == np.array([3, 6])).all())
-
- self.assertRaises(ValueError, generate_bins_generic, values, [],
- 'right')
- self.assertRaises(ValueError, generate_bins_generic, values[:0],
- binner, 'right')
-
- self.assertRaises(ValueError, generate_bins_generic, values, [4],
- 'right')
- self.assertRaises(ValueError, generate_bins_generic, values, [-3, -1],
- 'right')
-
-
-def test_group_ohlc():
- def _check(dtype):
- obj = np.array(np.random.randn(20), dtype=dtype)
-
- bins = np.array([6, 12, 20])
- out = np.zeros((3, 4), dtype)
- counts = np.zeros(len(out), dtype=np.int64)
- labels = com._ensure_int64(np.repeat(np.arange(3),
- np.diff(np.r_[0, bins])))
-
- func = getattr(algos, 'group_ohlc_%s' % dtype)
- func(out, counts, obj[:, None], labels)
-
- def _ohlc(group):
- if isnull(group).all():
- return np.repeat(nan, 4)
- return [group[0], group.max(), group.min(), group[-1]]
-
- expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
- _ohlc(obj[12:])])
-
- assert_almost_equal(out, expected)
- tm.assert_numpy_array_equal(counts,
- np.array([6, 6, 8], dtype=np.int64))
-
- obj[:6] = nan
- func(out, counts, obj[:, None], labels)
- expected[0] = nan
- assert_almost_equal(out, expected)
-
- _check('float32')
- _check('float64')
-
-
-def test_try_parse_dates():
- from dateutil.parser import parse
-
- arr = np.array(['5/1/2000', '6/1/2000', '7/1/2000'], dtype=object)
-
- result = lib.try_parse_dates(arr, dayfirst=True)
- expected = [parse(d, dayfirst=True) for d in arr]
- assert (np.array_equal(result, expected))
-
-
-class TestTypeInference(tm.TestCase):
- _multiprocess_can_split_ = True
-
- def test_length_zero(self):
- result = lib.infer_dtype(np.array([], dtype='i4'))
- self.assertEqual(result, 'integer')
-
- result = lib.infer_dtype([])
- self.assertEqual(result, 'empty')
-
- def test_integers(self):
- arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'integer')
-
- arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed-integer')
-
- arr = np.array([1, 2, 3, 4, 5], dtype='i4')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'integer')
-
- def test_bools(self):
- arr = np.array([True, False, True, True, True], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
-
- arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
-
- arr = np.array([True, False, True, 'foo'], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed')
-
- arr = np.array([True, False, True], dtype=bool)
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'boolean')
-
- def test_floats(self):
- arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
-
- arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
- dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed-integer')
-
- arr = np.array([1, 2, 3, 4, 5], dtype='f4')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
-
- arr = np.array([1, 2, 3, 4, 5], dtype='f8')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'floating')
-
- def test_string(self):
- pass
-
- def test_unicode(self):
- pass
-
- def test_datetime(self):
-
- dates = [datetime.datetime(2012, 1, x) for x in range(1, 20)]
- index = Index(dates)
- self.assertEqual(index.inferred_type, 'datetime64')
-
- def test_date(self):
-
- dates = [datetime.date(2012, 1, x) for x in range(1, 20)]
- index = Index(dates)
- self.assertEqual(index.inferred_type, 'date')
-
- def test_to_object_array_tuples(self):
- r = (5, 6)
- values = [r]
- result = lib.to_object_array_tuples(values)
-
- try:
- # make sure record array works
- from collections import namedtuple
- record = namedtuple('record', 'x y')
- r = record(5, 6)
- values = [r]
- result = lib.to_object_array_tuples(values) # noqa
- except ImportError:
- pass
-
- def test_object(self):
-
- # GH 7431
- # cannot infer more than this as only a single element
- arr = np.array([None], dtype='O')
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'mixed')
-
- def test_categorical(self):
-
- # GH 8974
- from pandas import Categorical, Series
- arr = Categorical(list('abc'))
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'categorical')
-
- result = lib.infer_dtype(Series(arr))
- self.assertEqual(result, 'categorical')
-
- arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
- result = lib.infer_dtype(arr)
- self.assertEqual(result, 'categorical')
-
- result = lib.infer_dtype(Series(arr))
- self.assertEqual(result, 'categorical')
-
-
-class TestMoments(tm.TestCase):
- pass
-
-
-class TestReducer(tm.TestCase):
- def test_int_index(self):
- from pandas.core.series import Series
-
- arr = np.random.randn(100, 4)
- result = lib.reduce(arr, np.sum, labels=Index(np.arange(4)))
- expected = arr.sum(0)
- assert_almost_equal(result, expected)
-
- result = lib.reduce(arr, np.sum, axis=1, labels=Index(np.arange(100)))
- expected = arr.sum(1)
- assert_almost_equal(result, expected)
-
- dummy = Series(0., index=np.arange(100))
- result = lib.reduce(arr, np.sum, dummy=dummy,
- labels=Index(np.arange(4)))
- expected = arr.sum(0)
- assert_almost_equal(result, expected)
-
- dummy = Series(0., index=np.arange(4))
- result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
- labels=Index(np.arange(100)))
- expected = arr.sum(1)
- assert_almost_equal(result, expected)
-
- result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
- labels=Index(np.arange(100)))
- assert_almost_equal(result, expected)
-
-
-class TestTsUtil(tm.TestCase):
- def test_min_valid(self):
- # Ensure that Timestamp.min is a valid Timestamp
- Timestamp(Timestamp.min)
-
- def test_max_valid(self):
- # Ensure that Timestamp.max is a valid Timestamp
- Timestamp(Timestamp.max)
-
- def test_to_datetime_bijective(self):
- # Ensure that converting to datetime and back only loses precision
- # by going from nanoseconds to microseconds.
- self.assertEqual(
- Timestamp(Timestamp.max.to_pydatetime()).value / 1000,
- Timestamp.max.value / 1000)
- self.assertEqual(
- Timestamp(Timestamp.min.to_pydatetime()).value / 1000,
- Timestamp.min.value / 1000)
-
-
-class TestPeriodField(tm.TestCase):
- def test_get_period_field_raises_on_out_of_range(self):
- self.assertRaises(ValueError, period.get_period_field, -1, 0, 0)
-
- def test_get_period_field_array_raises_on_out_of_range(self):
- self.assertRaises(ValueError, period.get_period_field_arr, -1,
- np.empty(1), 0)
diff --git a/pandas/tseries/tests/test_bin_groupby.py b/pandas/tseries/tests/test_bin_groupby.py
new file mode 100644
index 0000000000000..6b6c468b7c391
--- /dev/null
+++ b/pandas/tseries/tests/test_bin_groupby.py
@@ -0,0 +1,151 @@
+# -*- coding: utf-8 -*-
+
+from numpy import nan
+import numpy as np
+
+from pandas import Index, isnull
+from pandas.util.testing import assert_almost_equal
+import pandas.util.testing as tm
+import pandas.lib as lib
+import pandas.algos as algos
+from pandas.core import common as com
+
+
+def test_series_grouper():
+ from pandas import Series
+ obj = Series(np.random.randn(10))
+ dummy = obj[:0]
+
+ labels = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 1], dtype=np.int64)
+
+ grouper = lib.SeriesGrouper(obj, np.mean, labels, 2, dummy)
+ result, counts = grouper.get_result()
+
+ expected = np.array([obj[3:6].mean(), obj[6:].mean()])
+ assert_almost_equal(result, expected)
+
+ exp_counts = np.array([3, 4], dtype=np.int64)
+ assert_almost_equal(counts, exp_counts)
+
+
+def test_series_bin_grouper():
+ from pandas import Series
+ obj = Series(np.random.randn(10))
+ dummy = obj[:0]
+
+ bins = np.array([3, 6])
+
+ grouper = lib.SeriesBinGrouper(obj, np.mean, bins, dummy)
+ result, counts = grouper.get_result()
+
+ expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
+ assert_almost_equal(result, expected)
+
+ exp_counts = np.array([3, 3, 4], dtype=np.int64)
+ assert_almost_equal(counts, exp_counts)
+
+
+class TestBinGroupers(tm.TestCase):
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.obj = np.random.randn(10, 1)
+ self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
+ self.bins = np.array([3, 6], dtype=np.int64)
+
+ def test_generate_bins(self):
+ from pandas.core.groupby import generate_bins_generic
+ values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+ binner = np.array([0, 3, 6, 9], dtype=np.int64)
+
+ for func in [lib.generate_bins_dt64, generate_bins_generic]:
+ bins = func(values, binner, closed='left')
+ assert ((bins == np.array([2, 5, 6])).all())
+
+ bins = func(values, binner, closed='right')
+ assert ((bins == np.array([3, 6, 6])).all())
+
+ for func in [lib.generate_bins_dt64, generate_bins_generic]:
+ values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+ binner = np.array([0, 3, 6], dtype=np.int64)
+
+ bins = func(values, binner, closed='right')
+ assert ((bins == np.array([3, 6])).all())
+
+ self.assertRaises(ValueError, generate_bins_generic, values, [],
+ 'right')
+ self.assertRaises(ValueError, generate_bins_generic, values[:0],
+ binner, 'right')
+
+ self.assertRaises(ValueError, generate_bins_generic, values, [4],
+ 'right')
+ self.assertRaises(ValueError, generate_bins_generic, values, [-3, -1],
+ 'right')
+
+
+def test_group_ohlc():
+ def _check(dtype):
+ obj = np.array(np.random.randn(20), dtype=dtype)
+
+ bins = np.array([6, 12, 20])
+ out = np.zeros((3, 4), dtype)
+ counts = np.zeros(len(out), dtype=np.int64)
+ labels = com._ensure_int64(np.repeat(np.arange(3),
+ np.diff(np.r_[0, bins])))
+
+ func = getattr(algos, 'group_ohlc_%s' % dtype)
+ func(out, counts, obj[:, None], labels)
+
+ def _ohlc(group):
+ if isnull(group).all():
+ return np.repeat(nan, 4)
+ return [group[0], group.max(), group.min(), group[-1]]
+
+ expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
+ _ohlc(obj[12:])])
+
+ assert_almost_equal(out, expected)
+ tm.assert_numpy_array_equal(counts,
+ np.array([6, 6, 8], dtype=np.int64))
+
+ obj[:6] = nan
+ func(out, counts, obj[:, None], labels)
+ expected[0] = nan
+ assert_almost_equal(out, expected)
+
+ _check('float32')
+ _check('float64')
+
+
+class TestMoments(tm.TestCase):
+ pass
+
+
+class TestReducer(tm.TestCase):
+ def test_int_index(self):
+ from pandas.core.series import Series
+
+ arr = np.random.randn(100, 4)
+ result = lib.reduce(arr, np.sum, labels=Index(np.arange(4)))
+ expected = arr.sum(0)
+ assert_almost_equal(result, expected)
+
+ result = lib.reduce(arr, np.sum, axis=1, labels=Index(np.arange(100)))
+ expected = arr.sum(1)
+ assert_almost_equal(result, expected)
+
+ dummy = Series(0., index=np.arange(100))
+ result = lib.reduce(arr, np.sum, dummy=dummy,
+ labels=Index(np.arange(4)))
+ expected = arr.sum(0)
+ assert_almost_equal(result, expected)
+
+ dummy = Series(0., index=np.arange(4))
+ result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
+ labels=Index(np.arange(100)))
+ expected = arr.sum(1)
+ assert_almost_equal(result, expected)
+
+ result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
+ labels=Index(np.arange(100)))
+ assert_almost_equal(result, expected)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 8e6d339b87623..de23306c80b71 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -8,7 +8,7 @@
from datetime import datetime, date, timedelta
-from pandas import Timestamp
+from pandas import Timestamp, _period
from pandas.tseries.frequencies import MONTHS, DAYS, _period_code_map
from pandas.tseries.period import Period, PeriodIndex, period_range
from pandas.tseries.index import DatetimeIndex, date_range, Index
@@ -4450,6 +4450,14 @@ def test_ops_frame_period(self):
tm.assert_frame_equal(df - df2, -exp)
+class TestPeriodField(tm.TestCase):
+ def test_get_period_field_raises_on_out_of_range(self):
+ self.assertRaises(ValueError, _period.get_period_field, -1, 0, 0)
+
+ def test_get_period_field_array_raises_on_out_of_range(self):
+ self.assertRaises(ValueError, _period.get_period_field_arr, -1,
+ np.empty(1), 0)
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index d7426daa794c3..c6436163b9edb 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -2,7 +2,7 @@
from distutils.version import LooseVersion
import numpy as np
-from pandas import tslib
+from pandas import tslib, lib
import pandas._period as period
import datetime
@@ -25,6 +25,35 @@
from pandas.util.testing import assert_series_equal, _skip_if_has_locale
+class TestTsUtil(tm.TestCase):
+
+ def test_try_parse_dates(self):
+ from dateutil.parser import parse
+ arr = np.array(['5/1/2000', '6/1/2000', '7/1/2000'], dtype=object)
+
+ result = lib.try_parse_dates(arr, dayfirst=True)
+ expected = [parse(d, dayfirst=True) for d in arr]
+ self.assertTrue(np.array_equal(result, expected))
+
+ def test_min_valid(self):
+ # Ensure that Timestamp.min is a valid Timestamp
+ Timestamp(Timestamp.min)
+
+ def test_max_valid(self):
+ # Ensure that Timestamp.max is a valid Timestamp
+ Timestamp(Timestamp.max)
+
+ def test_to_datetime_bijective(self):
+ # Ensure that converting to datetime and back only loses precision
+ # by going from nanoseconds to microseconds.
+ self.assertEqual(
+ Timestamp(Timestamp.max.to_pydatetime()).value / 1000,
+ Timestamp.max.value / 1000)
+ self.assertEqual(
+ Timestamp(Timestamp.min.to_pydatetime()).value / 1000,
+ Timestamp.min.value / 1000)
+
+
class TestTimestamp(tm.TestCase):
def test_constructor(self):
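The precision claim in `test_to_datetime_bijective` above (only nanosecond digits are lost when round-tripping through `datetime`, which stores microseconds) can be illustrated with a stdlib-only sketch; the epoch value below is hypothetical, not taken from the PR:

```python
import datetime

# datetime.datetime resolves to microseconds, so converting a
# nanosecond timestamp through it drops exactly the sub-microsecond
# digits -- which is why the test compares `.value / 1000` on each side.
ns_since_epoch = 1464617637123456789
micros, lost_ns = divmod(ns_since_epoch, 1000)   # lost_ns holds the dropped digits

epoch = datetime.datetime(1970, 1, 1)
dt = epoch + datetime.timedelta(microseconds=micros)

# exact integer round-trip at microsecond resolution
roundtrip_us = (dt - epoch) // datetime.timedelta(microseconds=1)
assert roundtrip_us == micros == ns_since_epoch // 1000
```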
| https://api.github.com/repos/pandas-dev/pandas/pulls/13325 | 2016-05-30T14:13:57Z | 2016-05-30T14:41:44Z | null | 2016-05-30T14:41:44Z |
|
API: Deprecate compact_ints and use_unsigned in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 6cf41bbc50fb5..4eb42e1fb918d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -176,6 +176,17 @@ low_memory : boolean, default ``True``
Note that the entire file is read into a single DataFrame regardless,
use the ``chunksize`` or ``iterator`` parameter to return the data in chunks.
(Only valid with C parser)
+compact_ints : boolean, default False
+ DEPRECATED: this argument will be removed in a future version
+
+ If ``compact_ints`` is ``True``, then for any column that is of integer dtype, the
+ parser will attempt to cast it as the smallest integer ``dtype`` possible, either
+ signed or unsigned depending on the specification from the ``use_unsigned`` parameter.
+use_unsigned : boolean, default False
+ DEPRECATED: this argument will be removed in a future version
+
+ If integer columns are being compacted (i.e. ``compact_ints=True``), specify whether
+ the column should be compacted to the smallest signed or unsigned integer dtype.
NA and Missing Data Handling
++++++++++++++++++++++++++++
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 27540a9626398..03fe8bdc39611 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -292,6 +292,7 @@ Other API changes
Deprecations
^^^^^^^^^^^^
+- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv`` and will be removed in a future version (:issue:`13320`)
.. _whatsnew_0182.performance:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index bba8ad3ccd72b..2c8726f588522 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -227,6 +227,20 @@
Note that the entire file is read into a single DataFrame regardless,
use the `chunksize` or `iterator` parameter to return the data in chunks.
(Only valid with C parser)
+compact_ints : boolean, default False
+ DEPRECATED: this argument will be removed in a future version
+
+ If compact_ints is True, then for any column that is of integer dtype,
+ the parser will attempt to cast it as the smallest integer dtype possible,
+ either signed or unsigned depending on the specification from the
+ `use_unsigned` parameter.
+
+use_unsigned : boolean, default False
+ DEPRECATED: this argument will be removed in a future version
+
+ If integer columns are being compacted (i.e. `compact_ints=True`), specify
+ whether the column should be compacted to the smallest signed or unsigned
+ integer dtype.
Returns
-------
@@ -425,8 +439,6 @@ def _read(filepath_or_buffer, kwds):
_c_unsupported = set(['skip_footer'])
_python_unsupported = set([
'as_recarray',
- 'compact_ints',
- 'use_unsigned',
'low_memory',
'memory_map',
'buffer_lines',
@@ -435,6 +447,10 @@ def _read(filepath_or_buffer, kwds):
'dtype',
'float_precision',
])
+_deprecated_args = set([
+ 'compact_ints',
+ 'use_unsigned',
+])
def _make_parser_function(name, sep=','):
@@ -789,6 +805,12 @@ def _clean_options(self, options, engine):
_validate_header_arg(options['header'])
+ for arg in _deprecated_args:
+ if result[arg] != _c_parser_defaults[arg]:
+ warnings.warn("The '{arg}' argument has been deprecated "
+ "and will be removed in a future version"
+ .format(arg=arg), FutureWarning, stacklevel=2)
+
if index_col is True:
raise ValueError("The value of index_col couldn't be 'True'")
if _is_index_col(index_col):
@@ -1206,6 +1228,12 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
cvals, na_count = self._convert_types(
values, set(col_na_values) | col_na_fvalues, coerce_type)
+
+ if issubclass(cvals.dtype.type, np.integer) and self.compact_ints:
+ cvals = lib.downcast_int64(
+ cvals, _parser.na_values,
+ self.use_unsigned)
+
result[c] = cvals
if verbose and na_count:
print('Filled %d NA values in column %s' % (na_count, str(c)))
@@ -1648,8 +1676,11 @@ def __init__(self, f, **kwds):
self.verbose = kwds['verbose']
self.converters = kwds['converters']
+ self.compact_ints = kwds['compact_ints']
+ self.use_unsigned = kwds['use_unsigned']
self.thousands = kwds['thousands']
self.decimal = kwds['decimal']
+
self.comment = kwds['comment']
self._comment_lines = []
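The `_clean_options` hunk above warns whenever a deprecated keyword is passed with a non-default value. A minimal stand-alone sketch of that check (the `warn_deprecated_kwargs` name and the defaults dict are assumptions standing in for `_clean_options` and `_c_parser_defaults`):

```python
import warnings

# Warn once per deprecated keyword that was passed with a value
# different from its documented default.
def warn_deprecated_kwargs(passed, deprecated_defaults):
    for arg, default in deprecated_defaults.items():
        if passed.get(arg, default) != default:
            warnings.warn("The '{arg}' argument has been deprecated "
                          "and will be removed in a future version"
                          .format(arg=arg), FutureWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_deprecated_kwargs({'compact_ints': True},
                           {'compact_ints': False, 'use_unsigned': False})
assert len(caught) == 1 and issubclass(caught[0].category, FutureWarning)
```

Passing the default value raises no warning, which is what lets existing callers that never used `compact_ints` keep running silently.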
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 7fca37cef473e..b7ef754004e18 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -172,28 +172,8 @@ def error(val):
self.assertTrue(sum(precise_errors) <= sum(normal_errors))
self.assertTrue(max(precise_errors) <= max(normal_errors))
- def test_compact_ints(self):
- if compat.is_platform_windows() and not self.low_memory:
- raise nose.SkipTest(
- "segfaults on win-64, only when all tests are run")
-
- data = ('0,1,0,0\n'
- '1,1,0,0\n'
- '0,1,0,1')
-
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- compact_ints=True, as_recarray=True)
- ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
-
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- as_recarray=True, compact_ints=True,
- use_unsigned=True)
- ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
-
def test_compact_ints_as_recarray(self):
- if compat.is_platform_windows() and self.low_memory:
+ if compat.is_platform_windows():
raise nose.SkipTest(
"segfaults on win-64, only when all tests are run")
@@ -201,16 +181,20 @@ def test_compact_ints_as_recarray(self):
'1,1,0,0\n'
'0,1,0,1')
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- compact_ints=True, as_recarray=True)
- ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
-
- result = self.read_csv(StringIO(data), delimiter=',', header=None,
- as_recarray=True, compact_ints=True,
- use_unsigned=True)
- ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
- self.assertEqual(result.dtype, ex_dtype)
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = self.read_csv(StringIO(data), delimiter=',', header=None,
+ compact_ints=True, as_recarray=True)
+ ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)])
+ self.assertEqual(result.dtype, ex_dtype)
+
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ result = self.read_csv(StringIO(data), delimiter=',', header=None,
+ as_recarray=True, compact_ints=True,
+ use_unsigned=True)
+ ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)])
+ self.assertEqual(result.dtype, ex_dtype)
def test_pass_dtype(self):
data = """\
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 44892dc17c47b..f8c7241fdf88a 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1330,3 +1330,46 @@ def test_raise_on_no_columns(self):
# test with more than a single newline
data = "\n\n\n"
self.assertRaises(EmptyDataError, self.read_csv, StringIO(data))
+
+ def test_compact_ints_use_unsigned(self):
+ # see gh-13323
+ data = 'a,b,c\n1,9,258'
+
+ # sanity check
+ expected = DataFrame({
+ 'a': np.array([1], dtype=np.int64),
+ 'b': np.array([9], dtype=np.int64),
+ 'c': np.array([258], dtype=np.int64),
+ })
+ out = self.read_csv(StringIO(data))
+ tm.assert_frame_equal(out, expected)
+
+ expected = DataFrame({
+ 'a': np.array([1], dtype=np.int8),
+ 'b': np.array([9], dtype=np.int8),
+ 'c': np.array([258], dtype=np.int16),
+ })
+
+ # default behaviour for 'use_unsigned'
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ out = self.read_csv(StringIO(data), compact_ints=True)
+ tm.assert_frame_equal(out, expected)
+
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ out = self.read_csv(StringIO(data), compact_ints=True,
+ use_unsigned=False)
+ tm.assert_frame_equal(out, expected)
+
+ expected = DataFrame({
+ 'a': np.array([1], dtype=np.uint8),
+ 'b': np.array([9], dtype=np.uint8),
+ 'c': np.array([258], dtype=np.uint16),
+ })
+
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ out = self.read_csv(StringIO(data), compact_ints=True,
+ use_unsigned=True)
+ tm.assert_frame_equal(out, expected)
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index 3c1c45831e7b4..e820924d2be8b 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -117,6 +117,27 @@ def test_python_engine(self):
with tm.assertRaisesRegexp(ValueError, msg):
read_csv(StringIO(data), engine=engine, **kwargs)
+
+class TestDeprecatedFeatures(tm.TestCase):
+ def test_deprecated_args(self):
+ data = '1,2,3'
+
+ # deprecated arguments with non-default values
+ deprecated = {
+ 'compact_ints': True,
+ 'use_unsigned': True,
+ }
+
+ engines = 'c', 'python'
+
+ for engine in engines:
+ for arg, non_default_val in deprecated.items():
+ with tm.assert_produces_warning(
+ FutureWarning, check_stacklevel=False):
+ kwargs = {arg: non_default_val}
+ read_csv(StringIO(data), engine=engine,
+ **kwargs)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
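The tests above lean on `tm.assert_produces_warning`. A hedged, minimal stand-in for that helper (not pandas' actual implementation, which also checks stack levels): a context manager that fails unless the expected warning category was emitted inside the block.

```python
import contextlib
import warnings

@contextlib.contextmanager
def assert_produces_warning(expected):
    # Record all warnings raised in the block, then assert that at
    # least one matches the expected category.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        yield
    if not any(issubclass(w.category, expected) for w in caught):
        raise AssertionError("expected a %s" % expected.__name__)

# passes: the block emits the expected FutureWarning
with assert_produces_warning(FutureWarning):
    warnings.warn("'compact_ints' is deprecated", FutureWarning)
```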
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 729e5af528b80..d7ddaee658fe7 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -1018,7 +1018,7 @@ cdef class TextReader:
col_res = _maybe_upcast(col_res)
if issubclass(col_res.dtype.type, np.integer) and self.compact_ints:
- col_res = downcast_int64(col_res, self.use_unsigned)
+ col_res = lib.downcast_int64(col_res, na_values, self.use_unsigned)
if col_res is None:
raise CParserError('Unable to parse column %d' % i)
@@ -1866,76 +1866,6 @@ cdef raise_parser_error(object base, parser_t *parser):
raise CParserError(message)
-def downcast_int64(ndarray[int64_t] arr, bint use_unsigned=0):
- cdef:
- Py_ssize_t i, n = len(arr)
- int64_t mx = INT64_MIN + 1, mn = INT64_MAX
- int64_t NA = na_values[np.int64]
- int64_t val
- ndarray[uint8_t] mask
- int na_count = 0
-
- _mask = np.empty(n, dtype=bool)
- mask = _mask.view(np.uint8)
-
- for i in range(n):
- val = arr[i]
-
- if val == NA:
- mask[i] = 1
- na_count += 1
- continue
-
- # not NA
- mask[i] = 0
-
- if val > mx:
- mx = val
-
- if val < mn:
- mn = val
-
- if mn >= 0 and use_unsigned:
- if mx <= UINT8_MAX - 1:
- result = arr.astype(np.uint8)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint8])
- return result
-
- if mx <= UINT16_MAX - 1:
- result = arr.astype(np.uint16)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint16])
- return result
-
- if mx <= UINT32_MAX - 1:
- result = arr.astype(np.uint32)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint32])
- return result
-
- else:
- if mn >= INT8_MIN + 1 and mx <= INT8_MAX:
- result = arr.astype(np.int8)
- if na_count:
- np.putmask(result, _mask, na_values[np.int8])
- return result
-
- if mn >= INT16_MIN + 1 and mx <= INT16_MAX:
- result = arr.astype(np.int16)
- if na_count:
- np.putmask(result, _mask, na_values[np.int16])
- return result
-
- if mn >= INT32_MIN + 1 and mx <= INT32_MAX:
- result = arr.astype(np.int32)
- if na_count:
- np.putmask(result, _mask, na_values[np.int32])
- return result
-
- return arr
-
-
def _concatenate_chunks(list chunks):
cdef:
list names = list(chunks[0].keys())
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 5f7c5478b5d87..262e036ff44f1 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -6,6 +6,20 @@ iNaT = util.get_nat()
cdef bint PY2 = sys.version_info[0] == 2
+cdef extern from "headers/stdint.h":
+ enum: UINT8_MAX
+ enum: UINT16_MAX
+ enum: UINT32_MAX
+ enum: UINT64_MAX
+ enum: INT8_MIN
+ enum: INT8_MAX
+ enum: INT16_MIN
+ enum: INT16_MAX
+ enum: INT32_MAX
+ enum: INT32_MIN
+ enum: INT64_MAX
+ enum: INT64_MIN
+
# core.common import for fast inference checks
def is_float(object obj):
return util.is_float_object(obj)
@@ -1240,3 +1254,74 @@ def fast_multiget(dict mapping, ndarray keys, default=np.nan):
output[i] = default
return maybe_convert_objects(output)
+
+
+def downcast_int64(ndarray[int64_t] arr, object na_values,
+ bint use_unsigned=0):
+ cdef:
+ Py_ssize_t i, n = len(arr)
+ int64_t mx = INT64_MIN + 1, mn = INT64_MAX
+ int64_t NA = na_values[np.int64]
+ int64_t val
+ ndarray[uint8_t] mask
+ int na_count = 0
+
+ _mask = np.empty(n, dtype=bool)
+ mask = _mask.view(np.uint8)
+
+ for i in range(n):
+ val = arr[i]
+
+ if val == NA:
+ mask[i] = 1
+ na_count += 1
+ continue
+
+ # not NA
+ mask[i] = 0
+
+ if val > mx:
+ mx = val
+
+ if val < mn:
+ mn = val
+
+ if mn >= 0 and use_unsigned:
+ if mx <= UINT8_MAX - 1:
+ result = arr.astype(np.uint8)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.uint8])
+ return result
+
+ if mx <= UINT16_MAX - 1:
+ result = arr.astype(np.uint16)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.uint16])
+ return result
+
+ if mx <= UINT32_MAX - 1:
+ result = arr.astype(np.uint32)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.uint32])
+ return result
+
+ else:
+ if mn >= INT8_MIN + 1 and mx <= INT8_MAX:
+ result = arr.astype(np.int8)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.int8])
+ return result
+
+ if mn >= INT16_MIN + 1 and mx <= INT16_MAX:
+ result = arr.astype(np.int16)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.int16])
+ return result
+
+ if mn >= INT32_MIN + 1 and mx <= INT32_MAX:
+ result = arr.astype(np.int32)
+ if na_count:
+ np.putmask(result, _mask, na_values[np.int32])
+ return result
+
+ return arr
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
index 68eac12e5ec4c..a6941369b35be 100644
--- a/pandas/tests/test_infer_and_convert.py
+++ b/pandas/tests/test_infer_and_convert.py
@@ -401,6 +401,42 @@ def test_convert_sql_column_decimals(self):
expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
self.assert_numpy_array_equal(result, expected)
+ def test_convert_downcast_int64(self):
+ from pandas.parser import na_values
+
+ arr = np.array([1, 2, 7, 8, 10], dtype=np.int64)
+ expected = np.array([1, 2, 7, 8, 10], dtype=np.int8)
+
+ # default argument
+ result = lib.downcast_int64(arr, na_values)
+ self.assert_numpy_array_equal(result, expected)
+
+ result = lib.downcast_int64(arr, na_values, use_unsigned=False)
+ self.assert_numpy_array_equal(result, expected)
+
+ expected = np.array([1, 2, 7, 8, 10], dtype=np.uint8)
+ result = lib.downcast_int64(arr, na_values, use_unsigned=True)
+ self.assert_numpy_array_equal(result, expected)
+
+ # still cast to int8 despite use_unsigned=True
+ # because of the negative number as an element
+ arr = np.array([1, 2, -7, 8, 10], dtype=np.int64)
+ expected = np.array([1, 2, -7, 8, 10], dtype=np.int8)
+ result = lib.downcast_int64(arr, na_values, use_unsigned=True)
+ self.assert_numpy_array_equal(result, expected)
+
+ arr = np.array([1, 2, 7, 8, 300], dtype=np.int64)
+ expected = np.array([1, 2, 7, 8, 300], dtype=np.int16)
+ result = lib.downcast_int64(arr, na_values)
+ self.assert_numpy_array_equal(result, expected)
+
+ int8_na = na_values[np.int8]
+ int64_na = na_values[np.int64]
+ arr = np.array([int64_na, 2, 3, 10, 15], dtype=np.int64)
+ expected = np.array([int8_na, 2, 3, 10, 15], dtype=np.int8)
+ result = lib.downcast_int64(arr, na_values)
+ self.assert_numpy_array_equal(result, expected)
+
if __name__ == '__main__':
import nose
| Title is self-explanatory.
xref #12686 - I don't quite understand why these are marked (if at all) as internal to the C engine only, as the benefits of having these options accepted by the Python engine are quite clear based on the documentation I added as well.
Implementation simply just calls the already-written function in `pandas/parsers.pyx` - as it isn't specific to the `TextReader` class, crossing over to grab this function from Cython (instead of duplicating in pure Python) seems reasonable while maintaining that separation between the C and Python engines.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13323 | 2016-05-30T01:53:56Z | 2016-06-02T23:16:42Z | null | 2016-06-02T23:58:48Z |
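The Cython `downcast_int64` in the diff above picks the narrowest integer dtype that can hold every non-NA value, reserving one sentinel slot per dtype for NA (the type's maximum for unsigned, the most negative value for signed). A pure-Python sketch of that selection rule — illustrative only, the helper name is hypothetical and the sentinel convention is taken from the diff:

```python
def choose_downcast(values, use_unsigned=False):
    """Return the name of the smallest integer dtype that fits ``values``."""
    mn, mx = min(values), max(values)
    if mn >= 0 and use_unsigned:
        # the top value of each unsigned type is reserved as the NA
        # sentinel, hence the "- 1"
        for name, top in (("uint8", 2 ** 8 - 1),
                          ("uint16", 2 ** 16 - 1),
                          ("uint32", 2 ** 32 - 1)):
            if mx <= top - 1:
                return name
    else:
        # the most negative value of each signed type is reserved as the
        # NA sentinel, hence the "MIN + 1" lower bound
        for name, bits in (("int8", 8), ("int16", 16), ("int32", 32)):
            lo, hi = -(2 ** (bits - 1)) + 1, 2 ** (bits - 1) - 1
            if lo <= mn and mx <= hi:
                return name
    return "int64"
```

The cases in `test_convert_downcast_int64` follow from this rule: `[1, 2, 7, 8, 10]` fits `int8` (or `uint8` with `use_unsigned=True`), a negative element forces the signed branch, and `300` bumps the result to `int16`.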
ENH: add support for na_filter in Python engine | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 2b67aca1dcf74..be38adb96e403 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -75,6 +75,7 @@ Other enhancements
pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)
- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``decimal`` option (:issue:`12933`)
+- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``na_filter`` option (:issue:`13321`)
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
- ``Index`` now supports the ``.where()`` function for same shape indexing (:issue:`13170`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index bf4083f61155c..394fe1a98880a 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -425,7 +425,6 @@ def _read(filepath_or_buffer, kwds):
_c_unsupported = set(['skip_footer'])
_python_unsupported = set([
'as_recarray',
- 'na_filter',
'compact_ints',
'use_unsigned',
'low_memory',
@@ -1188,8 +1187,13 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
result = {}
for c, values in compat.iteritems(dct):
conv_f = None if converters is None else converters.get(c, None)
- col_na_values, col_na_fvalues = _get_na_values(c, na_values,
- na_fvalues)
+
+ if self.na_filter:
+ col_na_values, col_na_fvalues = _get_na_values(
+ c, na_values, na_fvalues)
+ else:
+ col_na_values, col_na_fvalues = set(), set()
+
coerce_type = True
if conv_f is not None:
try:
@@ -1634,6 +1638,8 @@ def __init__(self, f, **kwds):
self.names_passed = kwds['names'] or None
+ self.na_filter = kwds['na_filter']
+
self.has_index_names = False
if 'has_index_names' in kwds:
self.has_index_names = kwds['has_index_names']
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 9dde669c9d39d..00c4e0a1c022b 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -61,12 +61,6 @@ def test_delim_whitespace_custom_terminator(self):
columns=['a', 'b', 'c'])
tm.assert_frame_equal(df, expected)
- def test_parse_dates_empty_string(self):
- # see gh-2263
- s = StringIO("Date, test\n2012-01-01, 1\n,2")
- result = self.read_csv(s, parse_dates=["Date"], na_filter=False)
- self.assertTrue(result['Date'].isnull()[1])
-
def test_dtype_and_names_error(self):
# see gh-8833: passing both dtype and names
# resulting in an error reporting issue
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 2e3c102948cfa..44892dc17c47b 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1319,10 +1319,8 @@ def test_inf_parsing(self):
df = self.read_csv(StringIO(data), index_col=0)
tm.assert_almost_equal(df['A'].values, expected.values)
- if self.engine == 'c':
- # TODO: remove condition when 'na_filter' is supported for Python
- df = self.read_csv(StringIO(data), index_col=0, na_filter=False)
- tm.assert_almost_equal(df['A'].values, expected.values)
+ df = self.read_csv(StringIO(data), index_col=0, na_filter=False)
+ tm.assert_almost_equal(df['A'].values, expected.values)
def test_raise_on_no_columns(self):
# single newline
diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py
index 4705fd08af2b4..d826ae536c6cc 100644
--- a/pandas/io/tests/parser/na_values.py
+++ b/pandas/io/tests/parser/na_values.py
@@ -223,3 +223,21 @@ def test_na_values_keep_default(self):
'Three': ['None', 'two', 'None', 'nan', 'five', '',
'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
+
+ def test_na_values_na_filter_override(self):
+ data = """\
+A,B
+1,A
+nan,B
+3,C
+"""
+
+ expected = DataFrame([[1, 'A'], [np.nan, np.nan], [3, 'C']],
+ columns=['A', 'B'])
+ out = self.read_csv(StringIO(data), na_values=['B'], na_filter=True)
+ tm.assert_frame_equal(out, expected)
+
+ expected = DataFrame([['1', 'A'], ['nan', 'B'], ['3', 'C']],
+ columns=['A', 'B'])
+ out = self.read_csv(StringIO(data), na_values=['B'], na_filter=False)
+ tm.assert_frame_equal(out, expected)
diff --git a/pandas/io/tests/parser/parse_dates.py b/pandas/io/tests/parser/parse_dates.py
index ec368bb358ad5..01816bde66120 100644
--- a/pandas/io/tests/parser/parse_dates.py
+++ b/pandas/io/tests/parser/parse_dates.py
@@ -467,3 +467,10 @@ def test_read_with_parse_dates_invalid_type(self):
StringIO(data), parse_dates=np.array([4, 5]))
tm.assertRaisesRegexp(TypeError, errmsg, self.read_csv,
StringIO(data), parse_dates=set([1, 3, 3]))
+
+ def test_parse_dates_empty_string(self):
+ # see gh-2263
+ data = "Date, test\n2012-01-01, 1\n,2"
+ result = self.read_csv(StringIO(data), parse_dates=["Date"],
+ na_filter=False)
+ self.assertTrue(result['Date'].isnull()[1])
| Title is self-explanatory.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13321 | 2016-05-29T23:36:32Z | 2016-05-31T13:14:26Z | null | 2016-05-31T13:16:53Z |
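In outline, the change to `_convert_to_ndarrays` means the Python engine now consults `na_filter` before building the per-column NA sets, so disabling it leaves strings like `"nan"` untouched. A simplified stand-in — a hypothetical helper, using `None` in place of `np.nan`:

```python
def convert_column(values, na_values, na_filter=True):
    """Mimic the na_filter switch: an empty NA set when filtering is off."""
    col_na_values = na_values if na_filter else set()
    return [None if v in col_na_values else v for v in values]
```

This mirrors `test_na_values_na_filter_override` above: with `na_filter=True` the listed NA values are replaced, while `na_filter=False` returns every field verbatim.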
BUG: Parse trailing NaN values for the Python parser | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 33a48671a9b65..7736a26bb6947 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -349,6 +349,7 @@ Bug Fixes
- Bug in ``pd.read_csv()`` with ``engine='python'`` in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`)
+- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 394fe1a98880a..1f0155c4cc7a0 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2226,14 +2226,16 @@ def _get_index_name(self, columns):
return index_name, orig_names, columns
def _rows_to_cols(self, content):
- zipped_content = list(lib.to_object_array(content).T)
-
col_len = self.num_original_columns
- zip_len = len(zipped_content)
if self._implicit_index:
col_len += len(self.index_col)
+ # see gh-13320
+ zipped_content = list(lib.to_object_array(
+ content, min_width=col_len).T)
+ zip_len = len(zipped_content)
+
if self.skip_footer < 0:
raise ValueError('skip footer cannot be negative')
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 00c4e0a1c022b..7fca37cef473e 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -360,15 +360,6 @@ def test_raise_on_passed_int_dtype_with_nas(self):
sep=",", skipinitialspace=True,
dtype={'DOY': np.int64})
- def test_na_trailing_columns(self):
- data = """Date,Currenncy,Symbol,Type,Units,UnitPrice,Cost,Tax
-2012-03-14,USD,AAPL,BUY,1000
-2012-05-12,USD,SBUX,SELL,500"""
-
- result = self.read_csv(StringIO(data))
- self.assertEqual(result['Date'][1], '2012-05-12')
- self.assertTrue(result['UnitPrice'].isnull().all())
-
def test_parse_ragged_csv(self):
data = """1,2,3
1,2,3,4
diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py
index d826ae536c6cc..2a8c934abce61 100644
--- a/pandas/io/tests/parser/na_values.py
+++ b/pandas/io/tests/parser/na_values.py
@@ -241,3 +241,12 @@ def test_na_values_na_filter_override(self):
columns=['A', 'B'])
out = self.read_csv(StringIO(data), na_values=['B'], na_filter=False)
tm.assert_frame_equal(out, expected)
+
+ def test_na_trailing_columns(self):
+ data = """Date,Currenncy,Symbol,Type,Units,UnitPrice,Cost,Tax
+2012-03-14,USD,AAPL,BUY,1000
+2012-05-12,USD,SBUX,SELL,500"""
+
+ result = self.read_csv(StringIO(data))
+ self.assertEqual(result['Date'][1], '2012-05-12')
+ self.assertTrue(result['UnitPrice'].isnull().all())
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index d4e149eb09b65..5f7c5478b5d87 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -1132,7 +1132,24 @@ def map_infer(ndarray arr, object f, bint convert=1):
return result
-def to_object_array(list rows):
+def to_object_array(list rows, int min_width=0):
+ """
+ Convert a list of lists into an object array.
+
+ Parameters
+ ----------
+ rows : 2-d array (N, K)
+ A list of lists to be converted into an array
+ min_width : int
+ The minimum width of the object array. If a list
+ in `rows` contains fewer than `width` elements,
+ the remaining elements in the corresponding row
+ will all be `NaN`.
+
+ Returns
+ -------
+ obj_array : numpy array of the object dtype
+ """
cdef:
Py_ssize_t i, j, n, k, tmp
ndarray[object, ndim=2] result
@@ -1140,7 +1157,7 @@ def to_object_array(list rows):
n = len(rows)
- k = 0
+ k = min_width
for i from 0 <= i < n:
tmp = len(rows[i])
if tmp > k:
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
index 7558934c32bc8..68eac12e5ec4c 100644
--- a/pandas/tests/test_infer_and_convert.py
+++ b/pandas/tests/test_infer_and_convert.py
@@ -201,6 +201,23 @@ def test_to_object_array_tuples(self):
except ImportError:
pass
+ def test_to_object_array_width(self):
+ # see gh-13320
+ rows = [[1, 2, 3], [4, 5, 6]]
+
+ expected = np.array(rows, dtype=object)
+ out = lib.to_object_array(rows)
+ tm.assert_numpy_array_equal(out, expected)
+
+ expected = np.array(rows, dtype=object)
+ out = lib.to_object_array(rows, min_width=1)
+ tm.assert_numpy_array_equal(out, expected)
+
+ expected = np.array([[1, 2, 3, None, None],
+ [4, 5, 6, None, None]], dtype=object)
+ out = lib.to_object_array(rows, min_width=5)
+ tm.assert_numpy_array_equal(out, expected)
+
def test_object(self):
# GH 7431
| Fixes bug in which the Python parser failed to detect trailing `NaN` values in rows
| https://api.github.com/repos/pandas-dev/pandas/pulls/13320 | 2016-05-29T22:35:20Z | 2016-06-01T11:10:08Z | null | 2016-06-01T11:18:39Z |
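The fix works because `to_object_array` now pads short rows out to the expected column count, so trailing missing fields survive as NA instead of being dropped. A rough pure-Python equivalent of that padding step — a sketch with a hypothetical name, not the Cython implementation:

```python
def to_object_rows(rows, min_width=0):
    """Pad ragged rows out to at least ``min_width`` columns."""
    width = max(min_width, max(len(row) for row in rows))
    # missing trailing fields become None, which the parser later
    # treats as NaN
    return [list(row) + [None] * (width - len(row)) for row in rows]
```

This matches `test_to_object_array_width`: a `min_width` no larger than the widest row is a no-op, while a larger one pads every row with trailing `None`s.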
TST: Parser tests refactoring | diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index aeee77bb02e98..9dde669c9d39d 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -419,15 +419,6 @@ def test_tokenize_CR_with_quoting(self):
expected = self.read_csv(StringIO(data.replace('\r', '\n')))
tm.assert_frame_equal(result, expected)
- def test_raise_on_no_columns(self):
- # single newline
- data = "\n"
- self.assertRaises(ValueError, self.read_csv, StringIO(data))
-
- # test with more than a single newline
- data = "\n\n\n"
- self.assertRaises(ValueError, self.read_csv, StringIO(data))
-
def test_grow_boundary_at_cap(self):
# See gh-12494
#
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 14f4de853e118..2e3c102948cfa 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1323,3 +1323,12 @@ def test_inf_parsing(self):
# TODO: remove condition when 'na_filter' is supported for Python
df = self.read_csv(StringIO(data), index_col=0, na_filter=False)
tm.assert_almost_equal(df['A'].values, expected.values)
+
+ def test_raise_on_no_columns(self):
+ # single newline
+ data = "\n"
+ self.assertRaises(EmptyDataError, self.read_csv, StringIO(data))
+
+ # test with more than a single newline
+ data = "\n\n\n"
+ self.assertRaises(EmptyDataError, self.read_csv, StringIO(data))
diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py
index c34549835cb46..5916d8d347c8b 100644
--- a/pandas/io/tests/parser/na_values.py
+++ b/pandas/io/tests/parser/na_values.py
@@ -250,117 +250,3 @@ def test_na_values_keep_default(self):
'Three': ['None', 'two', 'None', 'nan', 'five', '',
'seven']})
tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
-
- def test_skiprow_with_newline(self):
- # see gh-12775 and gh-10911
- data = """id,text,num_lines
-1,"line 11
-line 12",2
-2,"line 21
-line 22",2
-3,"line 31",1"""
- expected = [[2, 'line 21\nline 22', 2],
- [3, 'line 31', 1]]
- expected = DataFrame(expected, columns=[
- 'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
- tm.assert_frame_equal(df, expected)
-
- data = ('a,b,c\n~a\n b~,~e\n d~,'
- '~f\n f~\n1,2,~12\n 13\n 14~')
- expected = [['a\n b', 'e\n d', 'f\n f']]
- expected = DataFrame(expected, columns=[
- 'a', 'b', 'c'])
- df = self.read_csv(StringIO(data),
- quotechar="~",
- skiprows=[2])
- tm.assert_frame_equal(df, expected)
-
- data = ('Text,url\n~example\n '
- 'sentence\n one~,url1\n~'
- 'example\n sentence\n two~,url2\n~'
- 'example\n sentence\n three~,url3')
- expected = [['example\n sentence\n two', 'url2']]
- expected = DataFrame(expected, columns=[
- 'Text', 'url'])
- df = self.read_csv(StringIO(data),
- quotechar="~",
- skiprows=[1, 3])
- tm.assert_frame_equal(df, expected)
-
- def test_skiprow_with_quote(self):
- # see gh-12775 and gh-10911
- data = """id,text,num_lines
-1,"line '11' line 12",2
-2,"line '21' line 22",2
-3,"line '31' line 32",1"""
- expected = [[2, "line '21' line 22", 2],
- [3, "line '31' line 32", 1]]
- expected = DataFrame(expected, columns=[
- 'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
- tm.assert_frame_equal(df, expected)
-
- def test_skiprow_with_newline_and_quote(self):
- # see gh-12775 and gh-10911
- data = """id,text,num_lines
-1,"line \n'11' line 12",2
-2,"line \n'21' line 22",2
-3,"line \n'31' line 32",1"""
- expected = [[2, "line \n'21' line 22", 2],
- [3, "line \n'31' line 32", 1]]
- expected = DataFrame(expected, columns=[
- 'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
- tm.assert_frame_equal(df, expected)
-
- data = """id,text,num_lines
-1,"line '11\n' line 12",2
-2,"line '21\n' line 22",2
-3,"line '31\n' line 32",1"""
- expected = [[2, "line '21\n' line 22", 2],
- [3, "line '31\n' line 32", 1]]
- expected = DataFrame(expected, columns=[
- 'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
- tm.assert_frame_equal(df, expected)
-
- data = """id,text,num_lines
-1,"line '11\n' \r\tline 12",2
-2,"line '21\n' \r\tline 22",2
-3,"line '31\n' \r\tline 32",1"""
- expected = [[2, "line '21\n' \r\tline 22", 2],
- [3, "line '31\n' \r\tline 32", 1]]
- expected = DataFrame(expected, columns=[
- 'id', 'text', 'num_lines'])
- df = self.read_csv(StringIO(data), skiprows=[1])
- tm.assert_frame_equal(df, expected)
-
- def test_skiprows_lineterminator(self):
- # see gh-9079
- data = '\n'.join(['SMOSMANIA ThetaProbe-ML2X ',
- '2007/01/01 01:00 0.2140 U M ',
- '2007/01/01 02:00 0.2141 M O ',
- '2007/01/01 04:00 0.2142 D M '])
- expected = DataFrame([['2007/01/01', '01:00', 0.2140, 'U', 'M'],
- ['2007/01/01', '02:00', 0.2141, 'M', 'O'],
- ['2007/01/01', '04:00', 0.2142, 'D', 'M']],
- columns=['date', 'time', 'var', 'flag',
- 'oflag'])
-
- # test with default line terminators "LF" and "CRLF"
- df = self.read_csv(StringIO(data), skiprows=1, delim_whitespace=True,
- names=['date', 'time', 'var', 'flag', 'oflag'])
- tm.assert_frame_equal(df, expected)
-
- df = self.read_csv(StringIO(data.replace('\n', '\r\n')),
- skiprows=1, delim_whitespace=True,
- names=['date', 'time', 'var', 'flag', 'oflag'])
- tm.assert_frame_equal(df, expected)
-
- # "CR" is not respected with the Python parser yet
- if self.engine == 'c':
- df = self.read_csv(StringIO(data.replace('\n', '\r')),
- skiprows=1, delim_whitespace=True,
- names=['date', 'time', 'var', 'flag', 'oflag'])
- tm.assert_frame_equal(df, expected)
diff --git a/pandas/io/tests/parser/skiprows.py b/pandas/io/tests/parser/skiprows.py
index 3e585a9a623c9..c9f50dec6c01e 100644
--- a/pandas/io/tests/parser/skiprows.py
+++ b/pandas/io/tests/parser/skiprows.py
@@ -76,3 +76,117 @@ def test_skiprows_blank(self):
datetime(2000, 1, 3)])
expected.index.name = 0
tm.assert_frame_equal(data, expected)
+
+ def test_skiprow_with_newline(self):
+ # see gh-12775 and gh-10911
+ data = """id,text,num_lines
+1,"line 11
+line 12",2
+2,"line 21
+line 22",2
+3,"line 31",1"""
+ expected = [[2, 'line 21\nline 22', 2],
+ [3, 'line 31', 1]]
+ expected = DataFrame(expected, columns=[
+ 'id', 'text', 'num_lines'])
+ df = self.read_csv(StringIO(data), skiprows=[1])
+ tm.assert_frame_equal(df, expected)
+
+ data = ('a,b,c\n~a\n b~,~e\n d~,'
+ '~f\n f~\n1,2,~12\n 13\n 14~')
+ expected = [['a\n b', 'e\n d', 'f\n f']]
+ expected = DataFrame(expected, columns=[
+ 'a', 'b', 'c'])
+ df = self.read_csv(StringIO(data),
+ quotechar="~",
+ skiprows=[2])
+ tm.assert_frame_equal(df, expected)
+
+ data = ('Text,url\n~example\n '
+ 'sentence\n one~,url1\n~'
+ 'example\n sentence\n two~,url2\n~'
+ 'example\n sentence\n three~,url3')
+ expected = [['example\n sentence\n two', 'url2']]
+ expected = DataFrame(expected, columns=[
+ 'Text', 'url'])
+ df = self.read_csv(StringIO(data),
+ quotechar="~",
+ skiprows=[1, 3])
+ tm.assert_frame_equal(df, expected)
+
+ def test_skiprow_with_quote(self):
+ # see gh-12775 and gh-10911
+ data = """id,text,num_lines
+1,"line '11' line 12",2
+2,"line '21' line 22",2
+3,"line '31' line 32",1"""
+ expected = [[2, "line '21' line 22", 2],
+ [3, "line '31' line 32", 1]]
+ expected = DataFrame(expected, columns=[
+ 'id', 'text', 'num_lines'])
+ df = self.read_csv(StringIO(data), skiprows=[1])
+ tm.assert_frame_equal(df, expected)
+
+ def test_skiprow_with_newline_and_quote(self):
+ # see gh-12775 and gh-10911
+ data = """id,text,num_lines
+1,"line \n'11' line 12",2
+2,"line \n'21' line 22",2
+3,"line \n'31' line 32",1"""
+ expected = [[2, "line \n'21' line 22", 2],
+ [3, "line \n'31' line 32", 1]]
+ expected = DataFrame(expected, columns=[
+ 'id', 'text', 'num_lines'])
+ df = self.read_csv(StringIO(data), skiprows=[1])
+ tm.assert_frame_equal(df, expected)
+
+ data = """id,text,num_lines
+1,"line '11\n' line 12",2
+2,"line '21\n' line 22",2
+3,"line '31\n' line 32",1"""
+ expected = [[2, "line '21\n' line 22", 2],
+ [3, "line '31\n' line 32", 1]]
+ expected = DataFrame(expected, columns=[
+ 'id', 'text', 'num_lines'])
+ df = self.read_csv(StringIO(data), skiprows=[1])
+ tm.assert_frame_equal(df, expected)
+
+ data = """id,text,num_lines
+1,"line '11\n' \r\tline 12",2
+2,"line '21\n' \r\tline 22",2
+3,"line '31\n' \r\tline 32",1"""
+ expected = [[2, "line '21\n' \r\tline 22", 2],
+ [3, "line '31\n' \r\tline 32", 1]]
+ expected = DataFrame(expected, columns=[
+ 'id', 'text', 'num_lines'])
+ df = self.read_csv(StringIO(data), skiprows=[1])
+ tm.assert_frame_equal(df, expected)
+
+ def test_skiprows_lineterminator(self):
+ # see gh-9079
+ data = '\n'.join(['SMOSMANIA ThetaProbe-ML2X ',
+ '2007/01/01 01:00 0.2140 U M ',
+ '2007/01/01 02:00 0.2141 M O ',
+ '2007/01/01 04:00 0.2142 D M '])
+ expected = DataFrame([['2007/01/01', '01:00', 0.2140, 'U', 'M'],
+ ['2007/01/01', '02:00', 0.2141, 'M', 'O'],
+ ['2007/01/01', '04:00', 0.2142, 'D', 'M']],
+ columns=['date', 'time', 'var', 'flag',
+ 'oflag'])
+
+ # test with default line terminators "LF" and "CRLF"
+ df = self.read_csv(StringIO(data), skiprows=1, delim_whitespace=True,
+ names=['date', 'time', 'var', 'flag', 'oflag'])
+ tm.assert_frame_equal(df, expected)
+
+ df = self.read_csv(StringIO(data.replace('\n', '\r\n')),
+ skiprows=1, delim_whitespace=True,
+ names=['date', 'time', 'var', 'flag', 'oflag'])
+ tm.assert_frame_equal(df, expected)
+
+ # "CR" is not respected with the Python parser yet
+ if self.engine == 'c':
+ df = self.read_csv(StringIO(data.replace('\n', '\r')),
+ skiprows=1, delim_whitespace=True,
+ names=['date', 'time', 'var', 'flag', 'oflag'])
+ tm.assert_frame_equal(df, expected)
| 1) Moved no columns test from CParser-only to `common.py`
2) Moved erroneously placed skiprows tests into their proper place
| https://api.github.com/repos/pandas-dev/pandas/pulls/13319 | 2016-05-29T22:33:11Z | 2016-05-30T14:21:42Z | null | 2016-05-30T21:24:56Z |
add compression support for 'read_pickle' and 'to_pickle' | diff --git a/doc/source/io.rst b/doc/source/io.rst
index b36ae8c2ed450..1b19599177c9a 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2926,6 +2926,45 @@ any pickled pandas object (or any other pickled object) from file:
These methods were previously ``pd.save`` and ``pd.load``, prior to 0.12.0, and are now deprecated.
+.. _io.pickle.compression:
+
+Read/Write compressed pickle files
+''''''''''''''
+
+.. versionadded:: 0.20.0
+
+:func:`read_pickle`, :meth:`DataFame.to_pickle` and :meth:`Series.to_pickle` can read
+and write compressed pickle files. Compression types of ``gzip``, ``bz2``, ``xz`` supports
+both read and write. ``zip`` file supports read only and must contain only one data file
+to be read in.
+Compression type can be an explicitely parameter or be inferred from the file extension.
+If 'infer', then use ``gzip``, ``bz2``, ``zip``, or ``xz`` if filename ends in ``'.gz'``, ``'.bz2'``, ``'.zip'``, or
+``'.xz'``, respectively.
+
+.. ipython:: python
+
+ df = pd.DataFrame({
+ 'A': np.random.randn(1000),
+ 'B': np.random.randn(1000),
+ 'C': np.random.randn(1000)})
+ df.to_pickle("data.pkl.compress", compression="gzip") # explicit compression type
+ df.to_pickle("data.pkl.xz", compression="infer") # infer compression type from extension
+ df.to_pickle("data.pkl.gz") # default, using "infer"
+ df["A"].to_pickle("s1.pkl.bz2")
+
+ df = pd.read_pickle("data.pkl.compress", compression="gzip")
+ df = pd.read_pickle("data.pkl.xz", compression="infer")
+ df = pd.read_pickle("data.pkl.gz")
+ s = pd.read_pickle("s1.pkl.bz2")
+
+.. ipython:: python
+ :suppress:
+ import os
+ os.remove("data.pkl.compress")
+ os.remove("data.pkl.xz")
+ os.remove("data.pkl.gz")
+ os.remove("s1.pkl.bz2")
+
.. _io.msgpack:
msgpack
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 54df7514a882d..d5c438e8c08d1 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -97,6 +97,40 @@ support for bz2 compression in the python 2 c-engine improved (:issue:`14874`).
df = pd.read_table(url, compression='bz2') # explicitly specify compression
df.head(2)
+.. _whatsnew_0200.enhancements.pickle_compression:
+
+Pickle file I/O now supports compression
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle`
+can now read from and write to compressed pickle files. The compression method
+can be specified explicitly or inferred from the file extension.
+See :ref:`Read/Write compressed pickle files <io.pickle.compression>`.
+
+.. ipython:: python
+
+ df = pd.DataFrame({
+ 'A': np.random.randn(1000),
+ 'B': np.random.randn(1000),
+ 'C': np.random.randn(1000)})
+ df.to_pickle("data.pkl.compress", compression="gzip") # explicit compression type
+ df.to_pickle("data.pkl.xz", compression="infer") # infer compression type from extension
+ df.to_pickle("data.pkl.gz") # default, using "infer"
+ df["A"].to_pickle("s1.pkl.bz2")
+
+ df = pd.read_pickle("data.pkl.compress", compression="gzip")
+ df = pd.read_pickle("data.pkl.xz", compression="infer")
+ df = pd.read_pickle("data.pkl.gz")
+ s = pd.read_pickle("s1.pkl.bz2")
+
+.. ipython:: python
+ :suppress:
+ import os
+ os.remove("data.pkl.compress")
+ os.remove("data.pkl.xz")
+ os.remove("data.pkl.gz")
+ os.remove("s1.pkl.bz2")
+
.. _whatsnew_0200.enhancements.uint64_support:
UInt64 Support Improved
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 127aac970fbc1..61a1514dd997a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1278,7 +1278,7 @@ def to_sql(self, name, con, flavor=None, schema=None, if_exists='fail',
if_exists=if_exists, index=index, index_label=index_label,
chunksize=chunksize, dtype=dtype)
- def to_pickle(self, path):
+ def to_pickle(self, path, compression='infer'):
"""
Pickle (serialize) object to input file path.
@@ -1286,9 +1286,13 @@ def to_pickle(self, path):
----------
path : string
File path
+ compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
+ a string representing the compression to use in the output file
+
+ .. versionadded:: 0.20.0
"""
from pandas.io.pickle import to_pickle
- return to_pickle(self, path)
+ return to_pickle(self, path, compression=compression)
def to_clipboard(self, excel=None, sep=None, **kwargs):
"""
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 74c51b74ca18a..e42d218d7925f 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -305,7 +305,7 @@ def _infer_compression(filepath_or_buffer, compression):
def _get_handle(path_or_buf, mode, encoding=None, compression=None,
- memory_map=False):
+ memory_map=False, is_text=True):
"""
Get file handle for given path/buffer and mode.
@@ -320,7 +320,9 @@ def _get_handle(path_or_buf, mode, encoding=None, compression=None,
Supported compression protocols are gzip, bz2, zip, and xz
memory_map : boolean, default False
See parsers._parser_params for more information.
-
+ is_text : boolean, default True
+ whether file/buffer is in text format (csv, json, etc.), or in binary
+ mode (pickle, etc.)
Returns
-------
f : file-like
@@ -394,13 +396,17 @@ def _get_handle(path_or_buf, mode, encoding=None, compression=None,
elif encoding:
# Python 3 and encoding
f = open(path_or_buf, mode, encoding=encoding)
- else:
+ elif is_text:
# Python 3 and no explicit encoding
f = open(path_or_buf, mode, errors='replace')
+ else:
+ # Python 3 and binary mode
+ f = open(path_or_buf, mode)
handles.append(f)
# in Python 3, convert BytesIO or fileobjects passed with an encoding
- if compat.PY3 and (compression or isinstance(f, need_text_wrapping)):
+ if compat.PY3 and is_text and\
+ (compression or isinstance(f, need_text_wrapping)):
from io import TextIOWrapper
f = TextIOWrapper(f, encoding=encoding)
handles.append(f)
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 2358c296f782e..969a2a51cb15d 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -4,9 +4,10 @@
from numpy.lib.format import read_array, write_array
from pandas.compat import BytesIO, cPickle as pkl, pickle_compat as pc, PY3
from pandas.types.common import is_datetime64_dtype, _NS_DTYPE
+from pandas.io.common import _get_handle, _infer_compression
-def to_pickle(obj, path):
+def to_pickle(obj, path, compression='infer'):
"""
Pickle (serialize) object to input file path
@@ -15,12 +16,23 @@ def to_pickle(obj, path):
obj : any object
path : string
File path
+ compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
+ a string representing the compression to use in the output file
+
+ .. versionadded:: 0.20.0
"""
- with open(path, 'wb') as f:
+ inferred_compression = _infer_compression(path, compression)
+ f, fh = _get_handle(path, 'wb',
+ compression=inferred_compression,
+ is_text=False)
+ try:
pkl.dump(obj, f, protocol=pkl.HIGHEST_PROTOCOL)
+ finally:
+ for _f in fh:
+ _f.close()
-def read_pickle(path):
+def read_pickle(path, compression='infer'):
"""
Load pickled pandas object (or any other pickled object) from the specified
file path
@@ -32,12 +44,32 @@ def read_pickle(path):
----------
path : string
File path
+ compression : {'infer', 'gzip', 'bz2', 'xz', 'zip', None}, default 'infer'
+ For on-the-fly decompression of on-disk data. If 'infer', then use
+ gzip, bz2, xz or zip if path is a string ending in '.gz', '.bz2', 'xz',
+ or 'zip' respectively, and no decompression otherwise.
+ Set to None for no decompression.
+
+ .. versionadded:: 0.20.0
Returns
-------
unpickled : type of object stored in file
"""
+ inferred_compression = _infer_compression(path, compression)
+
+ def read_wrapper(func):
+ # wrapper file handle open/close operation
+ f, fh = _get_handle(path, 'rb',
+ compression=inferred_compression,
+ is_text=False)
+ try:
+ return func(f)
+ finally:
+ for _f in fh:
+ _f.close()
+
def try_read(path, encoding=None):
# try with cPickle
# try with current pickle, if we have a Type Error then
@@ -48,19 +80,16 @@ def try_read(path, encoding=None):
# cpickle
# GH 6899
try:
- with open(path, 'rb') as fh:
- return pkl.load(fh)
+ return read_wrapper(lambda f: pkl.load(f))
except Exception:
# reg/patched pickle
try:
- with open(path, 'rb') as fh:
- return pc.load(fh, encoding=encoding, compat=False)
-
+ return read_wrapper(
+ lambda f: pc.load(f, encoding=encoding, compat=False))
# compat pickle
except:
- with open(path, 'rb') as fh:
- return pc.load(fh, encoding=encoding, compat=True)
-
+ return read_wrapper(
+ lambda f: pc.load(f, encoding=encoding, compat=True))
try:
return try_read(path)
except:
@@ -68,6 +97,7 @@ def try_read(path, encoding=None):
return try_read(path, encoding='latin1')
raise
+
# compat with sparse pickle / unpickle
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index c736ec829808a..2fffc3c39ec26 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -15,15 +15,14 @@
import pytest
import os
-
from distutils.version import LooseVersion
-
import pandas as pd
from pandas import Index
from pandas.compat import is_platform_little_endian
import pandas
import pandas.util.testing as tm
from pandas.tseries.offsets import Day, MonthEnd
+import shutil
@pytest.fixture(scope='module')
@@ -302,3 +301,196 @@ def test_pickle_v0_15_2():
# with open(pickle_path, 'wb') as f: pickle.dump(cat, f)
#
tm.assert_categorical_equal(cat, pd.read_pickle(pickle_path))
+
+
+# ---------------------
+# test pickle compression
+# ---------------------
+_compression_to_extension = {
+ None: ".none",
+ 'gzip': '.gz',
+ 'bz2': '.bz2',
+ 'zip': '.zip',
+ 'xz': '.xz',
+}
+
+
+def get_random_path():
+ return u'__%s__.pickle' % tm.rands(10)
+
+
+def compress_file(src_path, dest_path, compression):
+ if compression is None:
+ shutil.copyfile(src_path, dest_path)
+ return
+
+ if compression == 'gzip':
+ import gzip
+ f = gzip.open(dest_path, "w")
+ elif compression == 'bz2':
+ import bz2
+ f = bz2.BZ2File(dest_path, "w")
+ elif compression == 'zip':
+ import zipfile
+ zip_file = zipfile.ZipFile(dest_path, "w",
+ compression=zipfile.ZIP_DEFLATED)
+ zip_file.write(src_path, os.path.basename(src_path))
+ elif compression == 'xz':
+ lzma = pandas.compat.import_lzma()
+ f = lzma.LZMAFile(dest_path, "w")
+ else:
+ msg = 'Unrecognized compression type: {}'.format(compression)
+ raise ValueError(msg)
+
+ if compression != "zip":
+ f.write(open(src_path, "rb").read())
+ f.close()
+
+
+def decompress_file(src_path, dest_path, compression):
+ if compression is None:
+ shutil.copyfile(src_path, dest_path)
+ return
+
+ if compression == 'gzip':
+ import gzip
+ f = gzip.open(src_path, "r")
+ elif compression == 'bz2':
+ import bz2
+ f = bz2.BZ2File(src_path, "r")
+ elif compression == 'zip':
+ import zipfile
+ zip_file = zipfile.ZipFile(src_path)
+ zip_names = zip_file.namelist()
+ if len(zip_names) == 1:
+ f = zip_file.open(zip_names.pop())
+ else:
+ raise ValueError('ZIP file {} error. Only one file per ZIP.'
+ .format(src_path))
+ elif compression == 'xz':
+ lzma = pandas.compat.import_lzma()
+ f = lzma.LZMAFile(src_path, "r")
+ else:
+ msg = 'Unrecognized compression type: {}'.format(compression)
+ raise ValueError(msg)
+
+ open(dest_path, "wb").write(f.read())
+ f.close()
+
+
[email protected]('compression', [None, 'gzip', 'bz2', 'xz'])
+def test_write_explicit(compression):
+ # issue 11666
+ if compression == 'xz':
+ tm._skip_if_no_lzma()
+
+ base = get_random_path()
+ path1 = base + ".compressed"
+ path2 = base + ".raw"
+
+ with tm.ensure_clean(path1) as p1, tm.ensure_clean(path2) as p2:
+ df = tm.makeDataFrame()
+
+ # write to compressed file
+ df.to_pickle(p1, compression=compression)
+
+ # decompress
+ decompress_file(p1, p2, compression=compression)
+
+ # read decompressed file
+ df2 = pd.read_pickle(p2, compression=None)
+
+ tm.assert_frame_equal(df, df2)
+
+
[email protected]('compression', ['', 'None', 'bad', '7z'])
+def test_write_explicit_bad(compression):
+ with tm.assertRaisesRegexp(ValueError,
+ "Unrecognized compression type"):
+ with tm.ensure_clean(get_random_path()) as path:
+ df = tm.makeDataFrame()
+ df.to_pickle(path, compression=compression)
+
+
[email protected]('ext', ['', '.gz', '.bz2', '.xz', '.no_compress'])
+def test_write_infer(ext):
+ if ext == '.xz':
+ tm._skip_if_no_lzma()
+
+ base = get_random_path()
+ path1 = base + ext
+ path2 = base + ".raw"
+ compression = None
+ for c in _compression_to_extension:
+ if _compression_to_extension[c] == ext:
+ compression = c
+ break
+
+ with tm.ensure_clean(path1) as p1, tm.ensure_clean(path2) as p2:
+ df = tm.makeDataFrame()
+
+ # write to compressed file by inferred compression method
+ df.to_pickle(p1)
+
+ # decompress
+ decompress_file(p1, p2, compression=compression)
+
+ # read decompressed file
+ df2 = pd.read_pickle(p2, compression=None)
+
+ tm.assert_frame_equal(df, df2)
+
+
[email protected]('compression', [None, 'gzip', 'bz2', 'xz', "zip"])
+def test_read_explicit(compression):
+ # issue 11666
+ if compression == 'xz':
+ tm._skip_if_no_lzma()
+
+ base = get_random_path()
+ path1 = base + ".raw"
+ path2 = base + ".compressed"
+
+ with tm.ensure_clean(path1) as p1, tm.ensure_clean(path2) as p2:
+ df = tm.makeDataFrame()
+
+ # write to uncompressed file
+ df.to_pickle(p1, compression=None)
+
+ # compress
+ compress_file(p1, p2, compression=compression)
+
+ # read compressed file
+ df2 = pd.read_pickle(p2, compression=compression)
+
+ tm.assert_frame_equal(df, df2)
+
+
[email protected]('ext', ['', '.gz', '.bz2', '.xz', '.zip',
+ '.no_compress'])
+def test_read_infer(ext):
+ if ext == '.xz':
+ tm._skip_if_no_lzma()
+
+ base = get_random_path()
+ path1 = base + ".raw"
+ path2 = base + ext
+ compression = None
+ for c in _compression_to_extension:
+ if _compression_to_extension[c] == ext:
+ compression = c
+ break
+
+ with tm.ensure_clean(path1) as p1, tm.ensure_clean(path2) as p2:
+ df = tm.makeDataFrame()
+
+ # write to uncompressed file
+ df.to_pickle(p1, compression=None)
+
+ # compress
+ compress_file(p1, p2, compression=compression)
+
+ # read compressed file by inferred compression method
+ df2 = pd.read_pickle(p2)
+
+ tm.assert_frame_equal(df, df2)
| closes #11666
My code is not Pythonic enough and may need some refactoring. Any comments are welcome.
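
The extension-based inference this PR relies on can be sketched as a small mapping from file suffix to codec. This is a minimal illustration of the intended behavior only — `infer_compression` and `_EXT_TO_COMPRESSION` are hypothetical names, not the actual pandas internals:

```python
# Map of file extensions to compression codecs (hypothetical helper,
# sketching the behavior of the _infer_compression call used above).
_EXT_TO_COMPRESSION = {'.gz': 'gzip', '.bz2': 'bz2', '.xz': 'xz', '.zip': 'zip'}

def infer_compression(path, compression='infer'):
    """Return the compression codec to use for *path*.

    If *compression* is 'infer', inspect the file extension; otherwise
    pass the explicit value through (None means no compression).
    """
    if compression != 'infer':
        return compression
    for ext, codec in _EXT_TO_COMPRESSION.items():
        if path.endswith(ext):
            return codec
    return None

print(infer_compression('data.pkl.gz'))      # inferred from the extension
print(infer_compression('data.pkl', 'bz2'))  # explicit value wins
```

With `compression='infer'` (the default), `df.to_pickle('out.pkl.gz')` would pick gzip from the extension, while an unrecognized extension falls back to no compression.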
| https://api.github.com/repos/pandas-dev/pandas/pulls/13317 | 2016-05-29T13:19:02Z | 2017-03-09T15:25:04Z | null | 2017-03-09T16:24:19Z |
BUG: Groupby.nth includes group key inconsistently #12839 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index dfb5ebc9379b1..ca73351475154 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -313,7 +313,7 @@ Bug Fixes
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
-
+- Bug in ``groupby(..).nth()`` where the group key is included inconsistently (:issue:`12839`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index bea62e98e4a2a..0c3b53aa1419c 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -95,7 +95,7 @@ def _groupby_function(name, alias, npfunc, numeric_only=True,
@Appender(_doc_template)
@Appender(_local_template)
def f(self):
- self._set_selection_from_grouper()
+ self._set_group_selection()
try:
return self._cython_agg_general(alias, numeric_only=numeric_only)
except AssertionError as e:
@@ -457,8 +457,21 @@ def _selected_obj(self):
else:
return self.obj[self._selection]
- def _set_selection_from_grouper(self):
- """ we may need create a selection if we have non-level groupers """
+ def _reset_group_selection(self):
+ """
+ Clear group based selection. Used for methods needing to return info on
+ each group regardless of whether a group selection was previously set.
+ """
+ if self._group_selection is not None:
+ self._group_selection = None
+ # GH12839 clear cached selection too when changing group selection
+ self._reset_cache('_selected_obj')
+
+ def _set_group_selection(self):
+ """
+ Create group based selection. Used when selection is not passed
+ directly but instead via a grouper.
+ """
grp = self.grouper
if self.as_index and getattr(grp, 'groupings', None) is not None and \
self.obj.ndim > 1:
@@ -468,6 +481,8 @@ def _set_selection_from_grouper(self):
if len(groupers):
self._group_selection = ax.difference(Index(groupers)).tolist()
+ # GH12839 clear selected obj cache when group selection changes
+ self._reset_cache('_selected_obj')
def _set_result_index_ordered(self, result):
# set the result index on the passed values object and
@@ -511,7 +526,7 @@ def _make_wrapper(self, name):
# need to setup the selection
# as are not passed directly but in the grouper
- self._set_selection_from_grouper()
+ self._set_group_selection()
f = getattr(self._selected_obj, name)
if not isinstance(f, types.MethodType):
@@ -979,7 +994,7 @@ def mean(self, *args, **kwargs):
except GroupByError:
raise
except Exception: # pragma: no cover
- self._set_selection_from_grouper()
+ self._set_group_selection()
f = lambda x: x.mean(axis=self.axis)
return self._python_agg_general(f)
@@ -997,7 +1012,7 @@ def median(self):
raise
except Exception: # pragma: no cover
- self._set_selection_from_grouper()
+ self._set_group_selection()
def f(x):
if isinstance(x, np.ndarray):
@@ -1040,7 +1055,7 @@ def var(self, ddof=1, *args, **kwargs):
if ddof == 1:
return self._cython_agg_general('var')
else:
- self._set_selection_from_grouper()
+ self._set_group_selection()
f = lambda x: x.var(ddof=ddof)
return self._python_agg_general(f)
@@ -1216,7 +1231,7 @@ def nth(self, n, dropna=None):
raise TypeError("n needs to be an int or a list/set/tuple of ints")
nth_values = np.array(nth_values, dtype=np.intp)
- self._set_selection_from_grouper()
+ self._set_group_selection()
if not dropna:
mask = np.in1d(self._cumcount_array(), nth_values) | \
@@ -1324,7 +1339,7 @@ def cumcount(self, ascending=True):
dtype: int64
"""
- self._set_selection_from_grouper()
+ self._set_group_selection()
index = self._selected_obj.index
cumcounts = self._cumcount_array(ascending=ascending)
@@ -1402,6 +1417,7 @@ def head(self, n=5):
0 1 2
2 5 6
"""
+ self._reset_group_selection()
mask = self._cumcount_array() < n
return self._selected_obj[mask]
@@ -1428,6 +1444,7 @@ def tail(self, n=5):
0 a 1
2 b 1
"""
+ self._reset_group_selection()
mask = self._cumcount_array(ascending=False) < n
return self._selected_obj[mask]
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 6659e6b106a67..139f0472f7480 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -354,6 +354,35 @@ def test_nth_multi_index_as_expected(self):
names=['A', 'B']))
assert_frame_equal(result, expected)
+ def test_group_selection_cache(self):
+ # GH 12839 nth, head, and tail should return same result consistently
+ df = DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
+ expected = df.iloc[[0, 2]].set_index('A')
+
+ g = df.groupby('A')
+ result1 = g.head(n=2)
+ result2 = g.nth(0)
+ assert_frame_equal(result1, df)
+ assert_frame_equal(result2, expected)
+
+ g = df.groupby('A')
+ result1 = g.tail(n=2)
+ result2 = g.nth(0)
+ assert_frame_equal(result1, df)
+ assert_frame_equal(result2, expected)
+
+ g = df.groupby('A')
+ result1 = g.nth(0)
+ result2 = g.head(n=2)
+ assert_frame_equal(result1, expected)
+ assert_frame_equal(result2, df)
+
+ g = df.groupby('A')
+ result1 = g.nth(0)
+ result2 = g.tail(n=2)
+ assert_frame_equal(result1, expected)
+ assert_frame_equal(result2, df)
+
def test_grouper_index_types(self):
# related GH5375
# groupby misbehaving when using a Floatlike index
@@ -6107,7 +6136,7 @@ def test_cython_transform(self):
# bit a of hack to make sure the cythonized shift
# is equivalent to pre 0.17.1 behavior
if op == 'shift':
- gb._set_selection_from_grouper()
+ gb._set_group_selection()
for (op, args), targop in ops:
if op != 'shift' and 'int' not in gb_target:
| - [x] closes #12839
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
When the group selection changes, the cache for `_selected_obj` needs to be reset so that `nth`, `head`, and `tail` can return consistent results.
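
The bug class here can be shown with a toy lazily-cached attribute: once the state it was derived from changes, the cached value must be dropped or stale data is returned. The `Selector` class below is purely illustrative (hypothetical names, not pandas code), mirroring how `_selected_obj` is cached against `_group_selection`:

```python
# Toy illustration of the stale-cache bug fixed in this PR: a lazily
# cached attribute must be invalidated when its inputs change.
class Selector:
    def __init__(self, values):
        self.values = values
        self.selection = None      # analogous to _group_selection
        self._cache = {}

    @property
    def selected(self):            # analogous to the cached _selected_obj
        if 'selected' not in self._cache:
            if self.selection is None:
                self._cache['selected'] = list(self.values)
            else:
                self._cache['selected'] = [v for v in self.values
                                           if v in self.selection]
        return self._cache['selected']

    def set_selection(self, selection):
        self.selection = selection
        self._cache.pop('selected', None)  # the fix: reset the cache

s = Selector([1, 2, 3])
s.selected                 # populates the cache with [1, 2, 3]
s.set_selection({2, 3})
assert s.selected == [2, 3]  # would stay [1, 2, 3] without the reset
```

Without the `pop` in `set_selection`, the second access would return the list cached before the selection changed — the same shape of inconsistency seen when calling `head()` before `nth()` on the same groupby object.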
| https://api.github.com/repos/pandas-dev/pandas/pulls/13316 | 2016-05-29T10:46:11Z | 2016-07-06T21:47:54Z | null | 2016-07-07T03:12:27Z |
ENH: Add SemiMonthEnd and SemiMonthBegin offsets #1543 | diff --git a/asv_bench/benchmarks/timeseries.py b/asv_bench/benchmarks/timeseries.py
index bdf193cd1f3d3..2b0d098670858 100644
--- a/asv_bench/benchmarks/timeseries.py
+++ b/asv_bench/benchmarks/timeseries.py
@@ -1155,3 +1155,63 @@ def setup(self):
def time_timeseries_year_incr(self):
(self.date + self.year)
+
+
+class timeseries_semi_month_offset(object):
+ goal_time = 0.2
+
+ def setup(self):
+ self.N = 100000
+ self.rng = date_range(start='1/1/2000', periods=self.N, freq='T')
+ # date is not on an offset which will be slowest case
+ self.date = dt.datetime(2011, 1, 2)
+ self.semi_month_end = pd.offsets.SemiMonthEnd()
+ self.semi_month_begin = pd.offsets.SemiMonthBegin()
+
+ def time_semi_month_end_apply(self):
+ self.semi_month_end.apply(self.date)
+
+ def time_semi_month_end_incr(self):
+ self.date + self.semi_month_end
+
+ def time_semi_month_end_incr_n(self):
+ self.date + 10 * self.semi_month_end
+
+ def time_semi_month_end_decr(self):
+ self.date - self.semi_month_end
+
+ def time_semi_month_end_decr_n(self):
+ self.date - 10 * self.semi_month_end
+
+ def time_semi_month_end_apply_index(self):
+ self.semi_month_end.apply_index(self.rng)
+
+ def time_semi_month_end_incr_rng(self):
+ self.rng + self.semi_month_end
+
+ def time_semi_month_end_decr_rng(self):
+ self.rng - self.semi_month_end
+
+ def time_semi_month_begin_apply(self):
+ self.semi_month_begin.apply(self.date)
+
+ def time_semi_month_begin_incr(self):
+ self.date + self.semi_month_begin
+
+ def time_semi_month_begin_incr_n(self):
+ self.date + 10 * self.semi_month_begin
+
+ def time_semi_month_begin_decr(self):
+ self.date - self.semi_month_begin
+
+ def time_semi_month_begin_decr_n(self):
+ self.date - 10 * self.semi_month_begin
+
+ def time_semi_month_begin_apply_index(self):
+ self.semi_month_begin.apply_index(self.rng)
+
+ def time_semi_month_begin_incr_rng(self):
+ self.rng + self.semi_month_begin
+
+ def time_semi_month_begin_decr_rng(self):
+ self.rng - self.semi_month_begin
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 62601821488d3..7e832af14c051 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -589,6 +589,8 @@ frequency increment. Specific offset logic like "month", "business day", or
BMonthBegin, "business month begin"
CBMonthEnd, "custom business month end"
CBMonthBegin, "custom business month begin"
+ SemiMonthEnd, "15th (or other day_of_month) and calendar month end"
+ SemiMonthBegin, "15th (or other day_of_month) and calendar month begin"
QuarterEnd, "calendar quarter end"
QuarterBegin, "calendar quarter begin"
BQuarterEnd, "business quarter end"
@@ -967,9 +969,11 @@ frequencies. We will refer to these aliases as *offset aliases*
"D", "calendar day frequency"
"W", "weekly frequency"
"M", "month end frequency"
+ "SM", "semi-month end frequency (15th and end of month)"
"BM", "business month end frequency"
"CBM", "custom business month end frequency"
"MS", "month start frequency"
+ "SMS", "semi-month start frequency (1st and 15th)"
"BMS", "business month start frequency"
"CBMS", "custom business month start frequency"
"Q", "quarter end frequency"
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 105194e504f45..975d55fa2b86a 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -51,6 +51,43 @@ New behaviour:
In [2]: pd.read_csv(StringIO(data), names=names)
+.. _whatsnew_0182.enhancements.semi_month_offsets:
+
+Semi-Month Offsets
+^^^^^^^^^^^^^^^^^^
+
+Pandas has gained new frequency offsets, ``SemiMonthEnd`` ('SM') and ``SemiMonthBegin`` ('SMS').
+These provide date offsets anchored (by default) to the 15th and end of month, and 15th and 1st of month respectively.
+(:issue:`1543`)
+
+.. ipython:: python
+
+ from pandas.tseries.offsets import SemiMonthEnd, SemiMonthBegin
+
+SemiMonthEnd:
+
+.. ipython:: python
+
+ Timestamp('2016-01-01') + SemiMonthEnd()
+
+ pd.date_range('2015-01-01', freq='SM', periods=4)
+
+SemiMonthBegin:
+
+.. ipython:: python
+
+ Timestamp('2016-01-01') + SemiMonthBegin()
+
+ pd.date_range('2015-01-01', freq='SMS', periods=4)
+
+Using the anchoring suffix, you can also specify the day of month to use instead of the 15th.
+
+.. ipython:: python
+
+ pd.date_range('2015-01-01', freq='SMS-16', periods=4)
+
+ pd.date_range('2015-01-01', freq='SM-14', periods=4)
+
.. _whatsnew_0182.enhancements.other:
Other enhancements
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 7d3255add4f64..f4b75ddd72126 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -4,7 +4,8 @@
import numpy as np
from pandas.tseries.tools import to_datetime, normalize_date
-from pandas.core.common import ABCSeries, ABCDatetimeIndex, ABCPeriod
+from pandas.core.common import (ABCSeries, ABCDatetimeIndex, ABCPeriod,
+ AbstractMethodError)
# import after tools, dateutil check
from dateutil.relativedelta import relativedelta, weekday
@@ -18,6 +19,7 @@
__all__ = ['Day', 'BusinessDay', 'BDay', 'CustomBusinessDay', 'CDay',
'CBMonthEnd', 'CBMonthBegin',
'MonthBegin', 'BMonthBegin', 'MonthEnd', 'BMonthEnd',
+ 'SemiMonthEnd', 'SemiMonthBegin',
'BusinessHour', 'CustomBusinessHour',
'YearBegin', 'BYearBegin', 'YearEnd', 'BYearEnd',
'QuarterBegin', 'BQuarterBegin', 'QuarterEnd', 'BQuarterEnd',
@@ -1160,6 +1162,214 @@ def onOffset(self, dt):
_prefix = 'MS'
+class SemiMonthOffset(DateOffset):
+ _adjust_dst = True
+ _default_day_of_month = 15
+ _min_day_of_month = 2
+
+ def __init__(self, n=1, day_of_month=None, normalize=False, **kwds):
+ if day_of_month is None:
+ self.day_of_month = self._default_day_of_month
+ else:
+ self.day_of_month = int(day_of_month)
+ if not self._min_day_of_month <= self.day_of_month <= 27:
+ raise ValueError('day_of_month must be '
+ '{}<=day_of_month<=27, got {}'.format(
+ self._min_day_of_month, self.day_of_month))
+ self.n = int(n)
+ self.normalize = normalize
+ self.kwds = kwds
+ self.kwds['day_of_month'] = self.day_of_month
+
+ @classmethod
+ def _from_name(cls, suffix=None):
+ return cls(day_of_month=suffix)
+
+ @property
+ def rule_code(self):
+ suffix = '-{}'.format(self.day_of_month)
+ return self._prefix + suffix
+
+ @apply_wraps
+ def apply(self, other):
+ n = self.n
+ if not self.onOffset(other):
+ _, days_in_month = tslib.monthrange(other.year, other.month)
+ if 1 < other.day < self.day_of_month:
+ other += relativedelta(day=self.day_of_month)
+ if n > 0:
+ # rollforward so subtract 1
+ n -= 1
+ elif self.day_of_month < other.day < days_in_month:
+ other += relativedelta(day=self.day_of_month)
+ if n < 0:
+ # rollforward in the negative direction so add 1
+ n += 1
+ elif n == 0:
+ n = 1
+
+ return self._apply(n, other)
+
+ def _apply(self, n, other):
+ """Handle specific apply logic for child classes"""
+ raise AbstractMethodError(self)
+
+ @apply_index_wraps
+ def apply_index(self, i):
+ # determine how many days away from the 1st of the month we are
+ days_from_start = i.to_perioddelta('M').asi8
+ delta = Timedelta(days=self.day_of_month - 1).value
+
+ # get boolean array for each element before the day_of_month
+ before_day_of_month = days_from_start < delta
+
+ # get boolean array for each element after the day_of_month
+ after_day_of_month = days_from_start > delta
+
+ # determine the correct n for each date in i
+ roll = self._get_roll(i, before_day_of_month, after_day_of_month)
+
+ # isolate the time since it will be stripped away on the next line
+ time = i.to_perioddelta('D')
+
+ # apply the correct number of months
+ i = (i.to_period('M') + (roll // 2)).to_timestamp()
+
+ # apply the correct day
+ i = self._apply_index_days(i, roll)
+
+ return i + time
+
+ def _get_roll(self, i, before_day_of_month, after_day_of_month):
+ """Return an array with the correct n for each date in i.
+
+ The roll array is based on the fact that i gets rolled back to
+ the first day of the month.
+ """
+ raise AbstractMethodError(self)
+
+ def _apply_index_days(self, i, roll):
+ """Apply the correct day for each date in i"""
+ raise AbstractMethodError(self)
+
+
+class SemiMonthEnd(SemiMonthOffset):
+ """
+ Two DateOffset's per month repeating on the last
+ day of the month and day_of_month.
+
+ .. versionadded:: 0.18.2
+
+ Parameters
+ ----------
+ n: int
+ normalize : bool, default False
+ day_of_month: int, {1, 2,...,27}, default 15
+ """
+ _prefix = 'SM'
+ _min_day_of_month = 1
+
+ def onOffset(self, dt):
+ if self.normalize and not _is_normalized(dt):
+ return False
+ _, days_in_month = tslib.monthrange(dt.year, dt.month)
+ return dt.day in (self.day_of_month, days_in_month)
+
+ def _apply(self, n, other):
+ # if other.day is not day_of_month move to day_of_month and update n
+ if other.day < self.day_of_month:
+ other += relativedelta(day=self.day_of_month)
+ if n > 0:
+ n -= 1
+ elif other.day > self.day_of_month:
+ other += relativedelta(day=self.day_of_month)
+ if n == 0:
+ n = 1
+ else:
+ n += 1
+
+ months = n // 2
+ day = 31 if n % 2 else self.day_of_month
+ return other + relativedelta(months=months, day=day)
+
+ def _get_roll(self, i, before_day_of_month, after_day_of_month):
+ n = self.n
+ is_month_end = i.is_month_end
+ if n > 0:
+ roll_end = np.where(is_month_end, 1, 0)
+ roll_before = np.where(before_day_of_month, n, n + 1)
+ roll = roll_end + roll_before
+ elif n == 0:
+ roll_after = np.where(after_day_of_month, 2, 0)
+ roll_before = np.where(~after_day_of_month, 1, 0)
+ roll = roll_before + roll_after
+ else:
+ roll = np.where(after_day_of_month, n + 2, n + 1)
+ return roll
+
+ def _apply_index_days(self, i, roll):
+ i += (roll % 2) * Timedelta(days=self.day_of_month).value
+ return i + Timedelta(days=-1)
+
+
+class SemiMonthBegin(SemiMonthOffset):
+ """
+ Two DateOffset's per month repeating on the first
+ day of the month and day_of_month.
+
+ .. versionadded:: 0.18.2
+
+ Parameters
+ ----------
+ n: int
+ normalize : bool, default False
+ day_of_month: int, {2, 3,...,27}, default 15
+ """
+ _prefix = 'SMS'
+
+ def onOffset(self, dt):
+ if self.normalize and not _is_normalized(dt):
+ return False
+ return dt.day in (1, self.day_of_month)
+
+ def _apply(self, n, other):
+ # if other.day is not day_of_month move to day_of_month and update n
+ if other.day < self.day_of_month:
+ other += relativedelta(day=self.day_of_month)
+ if n == 0:
+ n = -1
+ else:
+ n -= 1
+ elif other.day > self.day_of_month:
+ other += relativedelta(day=self.day_of_month)
+ if n == 0:
+ n = 1
+ elif n < 0:
+ n += 1
+
+ months = n // 2 + n % 2
+ day = 1 if n % 2 else self.day_of_month
+ return other + relativedelta(months=months, day=day)
+
+ def _get_roll(self, i, before_day_of_month, after_day_of_month):
+ n = self.n
+ is_month_start = i.is_month_start
+ if n > 0:
+ roll = np.where(before_day_of_month, n, n + 1)
+ elif n == 0:
+ roll_start = np.where(is_month_start, 0, 1)
+ roll_after = np.where(after_day_of_month, 1, 0)
+ roll = roll_start + roll_after
+ else:
+ roll_after = np.where(after_day_of_month, n + 2, n + 1)
+ roll_start = np.where(is_month_start, -1, 0)
+ roll = roll_after + roll_start
+ return roll
+
+ def _apply_index_days(self, i, roll):
+ return i + (roll % 2) * Timedelta(days=self.day_of_month - 1).value
+
+
class BusinessMonthEnd(MonthOffset):
"""DateOffset increments between business EOM dates"""
@@ -2720,6 +2930,8 @@ def generate_range(start=None, end=None, periods=None,
CustomBusinessHour, # 'CBH'
MonthEnd, # 'M'
MonthBegin, # 'MS'
+ SemiMonthEnd, # 'SM'
+ SemiMonthBegin, # 'SMS'
Week, # 'W'
Second, # 'S'
Minute, # 'T'
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 528b9cc0b08a9..1f06b7ad4361b 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -52,6 +52,26 @@ def test_to_offset_multiple():
expected = offsets.Nano(2800)
assert (result == expected)
+ freqstr = '2SM'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.SemiMonthEnd(2)
+ assert (result == expected)
+
+ freqstr = '2SM-16'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.SemiMonthEnd(2, day_of_month=16)
+ assert (result == expected)
+
+ freqstr = '2SMS-14'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.SemiMonthBegin(2, day_of_month=14)
+ assert (result == expected)
+
+ freqstr = '2SMS-15'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.SemiMonthBegin(2)
+ assert (result == expected)
+
# malformed
try:
frequencies.to_offset('2h20m')
@@ -70,6 +90,14 @@ def test_to_offset_negative():
result = frequencies.to_offset(freqstr)
assert (result.n == -310)
+ freqstr = '-2SM'
+ result = frequencies.to_offset(freqstr)
+ assert (result.n == -2)
+
+ freqstr = '-1SMS'
+ result = frequencies.to_offset(freqstr)
+ assert (result.n == -1)
+
def test_to_offset_leading_zero():
freqstr = '00H 00T 01S'
@@ -137,6 +165,41 @@ def test_anchored_shortcuts():
expected = offsets.QuarterEnd(startingMonth=5)
assert (result1 == expected)
+ result1 = frequencies.to_offset('SM')
+ result2 = frequencies.to_offset('SM-15')
+ expected = offsets.SemiMonthEnd(day_of_month=15)
+ assert (result1 == expected)
+ assert (result2 == expected)
+
+ result = frequencies.to_offset('SM-1')
+ expected = offsets.SemiMonthEnd(day_of_month=1)
+ assert (result == expected)
+
+ result = frequencies.to_offset('SM-27')
+ expected = offsets.SemiMonthEnd(day_of_month=27)
+ assert (result == expected)
+
+ result = frequencies.to_offset('SMS-2')
+ expected = offsets.SemiMonthBegin(day_of_month=2)
+ assert (result == expected)
+
+ result = frequencies.to_offset('SMS-27')
+ expected = offsets.SemiMonthBegin(day_of_month=27)
+ assert (result == expected)
+
+ # ensure invalid cases fail as expected
+ invalid_anchors = ['SM-0', 'SM-28', 'SM-29',
+ 'SM-FOO', 'BSM', 'SM--1',
+ 'SMS-1', 'SMS-28', 'SMS-30',
+ 'SMS-BAR', 'BSMS', 'SMS--2']
+ for invalid_anchor in invalid_anchors:
+ try:
+ frequencies.to_offset(invalid_anchor)
+ except ValueError:
+ pass
+ else:
+ raise AssertionError(invalid_anchor)
+
def test_get_rule_month():
result = frequencies._get_rule_month('W')
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index ec88acc421cdb..5965a661699a6 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -11,9 +11,9 @@
from pandas.compat.numpy import np_datetime64_compat
from pandas.core.datetools import (bday, BDay, CDay, BQuarterEnd, BMonthEnd,
BusinessHour, CustomBusinessHour,
- CBMonthEnd, CBMonthBegin,
- BYearEnd, MonthEnd, MonthBegin, BYearBegin,
- QuarterBegin,
+ CBMonthEnd, CBMonthBegin, BYearEnd,
+ MonthEnd, MonthBegin, SemiMonthBegin,
+ SemiMonthEnd, BYearBegin, QuarterBegin,
BQuarterBegin, BMonthBegin, DateOffset,
Week, YearBegin, YearEnd, Hour, Minute,
Second, Day, Micro, Milli, Nano, Easter,
@@ -21,6 +21,7 @@
QuarterEnd, to_datetime, normalize_date,
get_offset, get_standard_freq)
+from pandas.core.series import Series
from pandas.tseries.frequencies import (_offset_map, get_freq_code,
_get_freq_str)
from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache
@@ -182,6 +183,8 @@ def setUp(self):
'BusinessMonthBegin':
Timestamp('2011-01-03 09:00:00'),
'MonthEnd': Timestamp('2011-01-31 09:00:00'),
+ 'SemiMonthEnd': Timestamp('2011-01-15 09:00:00'),
+ 'SemiMonthBegin': Timestamp('2011-01-15 09:00:00'),
'BusinessMonthEnd': Timestamp('2011-01-31 09:00:00'),
'YearBegin': Timestamp('2012-01-01 09:00:00'),
'BYearBegin': Timestamp('2011-01-03 09:00:00'),
@@ -311,9 +314,9 @@ def test_rollforward(self):
expecteds = self.expecteds.copy()
# result will not be changed if the target is on the offset
- no_changes = ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour',
- 'Minute', 'Second', 'Milli', 'Micro', 'Nano',
- 'DateOffset']
+ no_changes = ['Day', 'MonthBegin', 'SemiMonthBegin', 'YearBegin',
+ 'Week', 'Hour', 'Minute', 'Second', 'Milli', 'Micro',
+ 'Nano', 'DateOffset']
for n in no_changes:
expecteds[n] = Timestamp('2011/01/01 09:00')
@@ -328,6 +331,7 @@ def test_rollforward(self):
normalized = {'Day': Timestamp('2011-01-02 00:00:00'),
'DateOffset': Timestamp('2011-01-02 00:00:00'),
'MonthBegin': Timestamp('2011-02-01 00:00:00'),
+ 'SemiMonthBegin': Timestamp('2011-01-15 00:00:00'),
'YearBegin': Timestamp('2012-01-01 00:00:00'),
'Week': Timestamp('2011-01-08 00:00:00'),
'Hour': Timestamp('2011-01-01 00:00:00'),
@@ -358,6 +362,7 @@ def test_rollback(self):
Timestamp('2010-12-01 09:00:00'),
'BusinessMonthBegin': Timestamp('2010-12-01 09:00:00'),
'MonthEnd': Timestamp('2010-12-31 09:00:00'),
+ 'SemiMonthEnd': Timestamp('2010-12-31 09:00:00'),
'BusinessMonthEnd': Timestamp('2010-12-31 09:00:00'),
'BYearBegin': Timestamp('2010-01-01 09:00:00'),
'YearEnd': Timestamp('2010-12-31 09:00:00'),
@@ -375,8 +380,9 @@ def test_rollback(self):
'Easter': Timestamp('2010-04-04 09:00:00')}
# result will not be changed if the target is on the offset
- for n in ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour', 'Minute',
- 'Second', 'Milli', 'Micro', 'Nano', 'DateOffset']:
+ for n in ['Day', 'MonthBegin', 'SemiMonthBegin', 'YearBegin', 'Week',
+ 'Hour', 'Minute', 'Second', 'Milli', 'Micro', 'Nano',
+ 'DateOffset']:
expecteds[n] = Timestamp('2011/01/01 09:00')
# but be changed when normalize=True
@@ -387,6 +393,7 @@ def test_rollback(self):
normalized = {'Day': Timestamp('2010-12-31 00:00:00'),
'DateOffset': Timestamp('2010-12-31 00:00:00'),
'MonthBegin': Timestamp('2010-12-01 00:00:00'),
+ 'SemiMonthBegin': Timestamp('2010-12-15 00:00:00'),
'YearBegin': Timestamp('2010-01-01 00:00:00'),
'Week': Timestamp('2010-12-25 00:00:00'),
'Hour': Timestamp('2011-01-01 00:00:00'),
@@ -2646,6 +2653,353 @@ def test_onOffset(self):
assertOnOffset(offset, dt, expected)
+class TestSemiMonthEnd(Base):
+ _offset = SemiMonthEnd
+
+ def _get_tests(self):
+ tests = []
+
+ tests.append((SemiMonthEnd(),
+ {datetime(2008, 1, 1): datetime(2008, 1, 15),
+ datetime(2008, 1, 15): datetime(2008, 1, 31),
+ datetime(2008, 1, 31): datetime(2008, 2, 15),
+ datetime(2006, 12, 14): datetime(2006, 12, 15),
+ datetime(2006, 12, 29): datetime(2006, 12, 31),
+ datetime(2006, 12, 31): datetime(2007, 1, 15),
+ datetime(2007, 1, 1): datetime(2007, 1, 15),
+ datetime(2006, 12, 1): datetime(2006, 12, 15),
+ datetime(2006, 12, 15): datetime(2006, 12, 31)}))
+
+ tests.append((SemiMonthEnd(day_of_month=20),
+ {datetime(2008, 1, 1): datetime(2008, 1, 20),
+ datetime(2008, 1, 15): datetime(2008, 1, 20),
+ datetime(2008, 1, 21): datetime(2008, 1, 31),
+ datetime(2008, 1, 31): datetime(2008, 2, 20),
+ datetime(2006, 12, 14): datetime(2006, 12, 20),
+ datetime(2006, 12, 29): datetime(2006, 12, 31),
+ datetime(2006, 12, 31): datetime(2007, 1, 20),
+ datetime(2007, 1, 1): datetime(2007, 1, 20),
+ datetime(2006, 12, 1): datetime(2006, 12, 20),
+ datetime(2006, 12, 15): datetime(2006, 12, 20)}))
+
+ tests.append((SemiMonthEnd(0),
+ {datetime(2008, 1, 1): datetime(2008, 1, 15),
+ datetime(2008, 1, 16): datetime(2008, 1, 31),
+ datetime(2008, 1, 15): datetime(2008, 1, 15),
+ datetime(2008, 1, 31): datetime(2008, 1, 31),
+ datetime(2006, 12, 29): datetime(2006, 12, 31),
+ datetime(2006, 12, 31): datetime(2006, 12, 31),
+ datetime(2007, 1, 1): datetime(2007, 1, 15)}))
+
+ tests.append((SemiMonthEnd(0, day_of_month=16),
+ {datetime(2008, 1, 1): datetime(2008, 1, 16),
+ datetime(2008, 1, 16): datetime(2008, 1, 16),
+ datetime(2008, 1, 15): datetime(2008, 1, 16),
+ datetime(2008, 1, 31): datetime(2008, 1, 31),
+ datetime(2006, 12, 29): datetime(2006, 12, 31),
+ datetime(2006, 12, 31): datetime(2006, 12, 31),
+ datetime(2007, 1, 1): datetime(2007, 1, 16)}))
+
+ tests.append((SemiMonthEnd(2),
+ {datetime(2008, 1, 1): datetime(2008, 1, 31),
+ datetime(2008, 1, 31): datetime(2008, 2, 29),
+ datetime(2006, 12, 29): datetime(2007, 1, 15),
+ datetime(2006, 12, 31): datetime(2007, 1, 31),
+ datetime(2007, 1, 1): datetime(2007, 1, 31),
+ datetime(2007, 1, 16): datetime(2007, 2, 15),
+ datetime(2006, 11, 1): datetime(2006, 11, 30)}))
+
+ tests.append((SemiMonthEnd(-1),
+ {datetime(2007, 1, 1): datetime(2006, 12, 31),
+ datetime(2008, 6, 30): datetime(2008, 6, 15),
+ datetime(2008, 12, 31): datetime(2008, 12, 15),
+ datetime(2006, 12, 29): datetime(2006, 12, 15),
+ datetime(2006, 12, 30): datetime(2006, 12, 15),
+ datetime(2007, 1, 1): datetime(2006, 12, 31)}))
+
+ tests.append((SemiMonthEnd(-1, day_of_month=4),
+ {datetime(2007, 1, 1): datetime(2006, 12, 31),
+ datetime(2007, 1, 4): datetime(2006, 12, 31),
+ datetime(2008, 6, 30): datetime(2008, 6, 4),
+ datetime(2008, 12, 31): datetime(2008, 12, 4),
+ datetime(2006, 12, 5): datetime(2006, 12, 4),
+ datetime(2006, 12, 30): datetime(2006, 12, 4),
+ datetime(2007, 1, 1): datetime(2006, 12, 31)}))
+
+ tests.append((SemiMonthEnd(-2),
+ {datetime(2007, 1, 1): datetime(2006, 12, 15),
+ datetime(2008, 6, 30): datetime(2008, 5, 31),
+ datetime(2008, 3, 15): datetime(2008, 2, 15),
+ datetime(2008, 12, 31): datetime(2008, 11, 30),
+ datetime(2006, 12, 29): datetime(2006, 11, 30),
+ datetime(2006, 12, 14): datetime(2006, 11, 15),
+ datetime(2007, 1, 1): datetime(2006, 12, 15)}))
+
+ return tests
+
+ def test_offset_whole_year(self):
+ dates = (datetime(2007, 12, 31),
+ datetime(2008, 1, 15),
+ datetime(2008, 1, 31),
+ datetime(2008, 2, 15),
+ datetime(2008, 2, 29),
+ datetime(2008, 3, 15),
+ datetime(2008, 3, 31),
+ datetime(2008, 4, 15),
+ datetime(2008, 4, 30),
+ datetime(2008, 5, 15),
+ datetime(2008, 5, 31),
+ datetime(2008, 6, 15),
+ datetime(2008, 6, 30),
+ datetime(2008, 7, 15),
+ datetime(2008, 7, 31),
+ datetime(2008, 8, 15),
+ datetime(2008, 8, 31),
+ datetime(2008, 9, 15),
+ datetime(2008, 9, 30),
+ datetime(2008, 10, 15),
+ datetime(2008, 10, 31),
+ datetime(2008, 11, 15),
+ datetime(2008, 11, 30),
+ datetime(2008, 12, 15),
+ datetime(2008, 12, 31))
+
+ for base, exp_date in zip(dates[:-1], dates[1:]):
+ assertEq(SemiMonthEnd(), base, exp_date)
+
+ # ensure .apply_index works as expected
+ s = DatetimeIndex(dates[:-1])
+ result = SemiMonthEnd().apply_index(s)
+ exp = DatetimeIndex(dates[1:])
+ tm.assert_index_equal(result, exp)
+
+ # ensure generating a range with DatetimeIndex gives same result
+ result = DatetimeIndex(start=dates[0], end=dates[-1], freq='SM')
+ exp = DatetimeIndex(dates)
+ tm.assert_index_equal(result, exp)
+
+ def test_offset(self):
+ for offset, cases in self._get_tests():
+ for base, expected in compat.iteritems(cases):
+ assertEq(offset, base, expected)
+
+ def test_apply_index(self):
+ for offset, cases in self._get_tests():
+ s = DatetimeIndex(cases.keys())
+ result = offset.apply_index(s)
+ exp = DatetimeIndex(cases.values())
+ tm.assert_index_equal(result, exp)
+
+ def test_onOffset(self):
+
+ tests = [(datetime(2007, 12, 31), True),
+ (datetime(2007, 12, 15), True),
+ (datetime(2007, 12, 14), False),
+ (datetime(2007, 12, 1), False),
+ (datetime(2008, 2, 29), True)]
+
+ for dt, expected in tests:
+ assertOnOffset(SemiMonthEnd(), dt, expected)
+
+ def test_vectorized_offset_addition(self):
+ for klass, assert_func in zip([Series, DatetimeIndex],
+ [self.assert_series_equal,
+ tm.assert_index_equal]):
+ s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-15', tz='US/Central')], name='a')
+
+ result = s + SemiMonthEnd()
+ result2 = SemiMonthEnd() + s
+ exp = klass([Timestamp('2000-01-31 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-29', tz='US/Central')], name='a')
+ assert_func(result, exp)
+ assert_func(result2, exp)
+
+ s = klass([Timestamp('2000-01-01 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-01', tz='US/Central')], name='a')
+ result = s + SemiMonthEnd()
+ result2 = SemiMonthEnd() + s
+ exp = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-15', tz='US/Central')], name='a')
+ assert_func(result, exp)
+ assert_func(result2, exp)
+
+
+class TestSemiMonthBegin(Base):
+ _offset = SemiMonthBegin
+
+ def _get_tests(self):
+ tests = []
+
+ tests.append((SemiMonthBegin(),
+ {datetime(2008, 1, 1): datetime(2008, 1, 15),
+ datetime(2008, 1, 15): datetime(2008, 2, 1),
+ datetime(2008, 1, 31): datetime(2008, 2, 1),
+ datetime(2006, 12, 14): datetime(2006, 12, 15),
+ datetime(2006, 12, 29): datetime(2007, 1, 1),
+ datetime(2006, 12, 31): datetime(2007, 1, 1),
+ datetime(2007, 1, 1): datetime(2007, 1, 15),
+ datetime(2006, 12, 1): datetime(2006, 12, 15),
+ datetime(2006, 12, 15): datetime(2007, 1, 1)}))
+
+ tests.append((SemiMonthBegin(day_of_month=20),
+ {datetime(2008, 1, 1): datetime(2008, 1, 20),
+ datetime(2008, 1, 15): datetime(2008, 1, 20),
+ datetime(2008, 1, 21): datetime(2008, 2, 1),
+ datetime(2008, 1, 31): datetime(2008, 2, 1),
+ datetime(2006, 12, 14): datetime(2006, 12, 20),
+ datetime(2006, 12, 29): datetime(2007, 1, 1),
+ datetime(2006, 12, 31): datetime(2007, 1, 1),
+ datetime(2007, 1, 1): datetime(2007, 1, 20),
+ datetime(2006, 12, 1): datetime(2006, 12, 20),
+ datetime(2006, 12, 15): datetime(2006, 12, 20)}))
+
+ tests.append((SemiMonthBegin(0),
+ {datetime(2008, 1, 1): datetime(2008, 1, 1),
+ datetime(2008, 1, 16): datetime(2008, 2, 1),
+ datetime(2008, 1, 15): datetime(2008, 1, 15),
+ datetime(2008, 1, 31): datetime(2008, 2, 1),
+ datetime(2006, 12, 29): datetime(2007, 1, 1),
+ datetime(2006, 12, 2): datetime(2006, 12, 15),
+ datetime(2007, 1, 1): datetime(2007, 1, 1)}))
+
+ tests.append((SemiMonthBegin(0, day_of_month=16),
+ {datetime(2008, 1, 1): datetime(2008, 1, 1),
+ datetime(2008, 1, 16): datetime(2008, 1, 16),
+ datetime(2008, 1, 15): datetime(2008, 1, 16),
+ datetime(2008, 1, 31): datetime(2008, 2, 1),
+ datetime(2006, 12, 29): datetime(2007, 1, 1),
+ datetime(2006, 12, 31): datetime(2007, 1, 1),
+ datetime(2007, 1, 5): datetime(2007, 1, 16),
+ datetime(2007, 1, 1): datetime(2007, 1, 1)}))
+
+ tests.append((SemiMonthBegin(2),
+ {datetime(2008, 1, 1): datetime(2008, 2, 1),
+ datetime(2008, 1, 31): datetime(2008, 2, 15),
+ datetime(2006, 12, 1): datetime(2007, 1, 1),
+ datetime(2006, 12, 29): datetime(2007, 1, 15),
+ datetime(2006, 12, 15): datetime(2007, 1, 15),
+ datetime(2007, 1, 1): datetime(2007, 2, 1),
+ datetime(2007, 1, 16): datetime(2007, 2, 15),
+ datetime(2006, 11, 1): datetime(2006, 12, 1)}))
+
+ tests.append((SemiMonthBegin(-1),
+ {datetime(2007, 1, 1): datetime(2006, 12, 15),
+ datetime(2008, 6, 30): datetime(2008, 6, 15),
+ datetime(2008, 6, 14): datetime(2008, 6, 1),
+ datetime(2008, 12, 31): datetime(2008, 12, 15),
+ datetime(2006, 12, 29): datetime(2006, 12, 15),
+ datetime(2006, 12, 15): datetime(2006, 12, 1),
+ datetime(2007, 1, 1): datetime(2006, 12, 15)}))
+
+ tests.append((SemiMonthBegin(-1, day_of_month=4),
+ {datetime(2007, 1, 1): datetime(2006, 12, 4),
+ datetime(2007, 1, 4): datetime(2007, 1, 1),
+ datetime(2008, 6, 30): datetime(2008, 6, 4),
+ datetime(2008, 12, 31): datetime(2008, 12, 4),
+ datetime(2006, 12, 5): datetime(2006, 12, 4),
+ datetime(2006, 12, 30): datetime(2006, 12, 4),
+ datetime(2006, 12, 2): datetime(2006, 12, 1),
+ datetime(2007, 1, 1): datetime(2006, 12, 4)}))
+
+ tests.append((SemiMonthBegin(-2),
+ {datetime(2007, 1, 1): datetime(2006, 12, 1),
+ datetime(2008, 6, 30): datetime(2008, 6, 1),
+ datetime(2008, 6, 14): datetime(2008, 5, 15),
+ datetime(2008, 12, 31): datetime(2008, 12, 1),
+ datetime(2006, 12, 29): datetime(2006, 12, 1),
+ datetime(2006, 12, 15): datetime(2006, 11, 15),
+ datetime(2007, 1, 1): datetime(2006, 12, 1)}))
+
+ return tests
+
+ def test_offset_whole_year(self):
+ dates = (datetime(2007, 12, 15),
+ datetime(2008, 1, 1),
+ datetime(2008, 1, 15),
+ datetime(2008, 2, 1),
+ datetime(2008, 2, 15),
+ datetime(2008, 3, 1),
+ datetime(2008, 3, 15),
+ datetime(2008, 4, 1),
+ datetime(2008, 4, 15),
+ datetime(2008, 5, 1),
+ datetime(2008, 5, 15),
+ datetime(2008, 6, 1),
+ datetime(2008, 6, 15),
+ datetime(2008, 7, 1),
+ datetime(2008, 7, 15),
+ datetime(2008, 8, 1),
+ datetime(2008, 8, 15),
+ datetime(2008, 9, 1),
+ datetime(2008, 9, 15),
+ datetime(2008, 10, 1),
+ datetime(2008, 10, 15),
+ datetime(2008, 11, 1),
+ datetime(2008, 11, 15),
+ datetime(2008, 12, 1),
+ datetime(2008, 12, 15))
+
+ for base, exp_date in zip(dates[:-1], dates[1:]):
+ assertEq(SemiMonthBegin(), base, exp_date)
+
+ # ensure .apply_index works as expected
+ s = DatetimeIndex(dates[:-1])
+ result = SemiMonthBegin().apply_index(s)
+ exp = DatetimeIndex(dates[1:])
+ tm.assert_index_equal(result, exp)
+
+ # ensure generating a range with DatetimeIndex gives same result
+ result = DatetimeIndex(start=dates[0], end=dates[-1], freq='SMS')
+ exp = DatetimeIndex(dates)
+ tm.assert_index_equal(result, exp)
+
+ def test_offset(self):
+ for offset, cases in self._get_tests():
+ for base, expected in compat.iteritems(cases):
+ assertEq(offset, base, expected)
+
+ def test_apply_index(self):
+ for offset, cases in self._get_tests():
+ s = DatetimeIndex(cases.keys())
+ result = offset.apply_index(s)
+ exp = DatetimeIndex(cases.values())
+ tm.assert_index_equal(result, exp)
+
+ def test_onOffset(self):
+ tests = [(datetime(2007, 12, 1), True),
+ (datetime(2007, 12, 15), True),
+ (datetime(2007, 12, 14), False),
+ (datetime(2007, 12, 31), False),
+ (datetime(2008, 2, 15), True)]
+
+ for dt, expected in tests:
+ assertOnOffset(SemiMonthBegin(), dt, expected)
+
+ def test_vectorized_offset_addition(self):
+ for klass, assert_func in zip([Series, DatetimeIndex],
+ [self.assert_series_equal,
+ tm.assert_index_equal]):
+
+ s = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-15', tz='US/Central')], name='a')
+ result = s + SemiMonthBegin()
+ result2 = SemiMonthBegin() + s
+ exp = klass([Timestamp('2000-02-01 00:15:00', tz='US/Central'),
+ Timestamp('2000-03-01', tz='US/Central')], name='a')
+ assert_func(result, exp)
+ assert_func(result2, exp)
+
+ s = klass([Timestamp('2000-01-01 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-01', tz='US/Central')], name='a')
+ result = s + SemiMonthBegin()
+ result2 = SemiMonthBegin() + s
+ exp = klass([Timestamp('2000-01-15 00:15:00', tz='US/Central'),
+ Timestamp('2000-02-15', tz='US/Central')], name='a')
+ assert_func(result, exp)
+ assert_func(result2, exp)
+
+
class TestBQuarterBegin(Base):
_offset = BQuarterBegin
@@ -4537,6 +4891,8 @@ def test_all_offset_classes(self):
BMonthEnd: ['11/2/2012', '11/30/2012'],
CBMonthBegin: ['11/2/2012', '12/3/2012'],
CBMonthEnd: ['11/2/2012', '11/30/2012'],
+ SemiMonthBegin: ['11/2/2012', '11/15/2012'],
+ SemiMonthEnd: ['11/2/2012', '11/15/2012'],
Week: ['11/2/2012', '11/9/2012'],
YearBegin: ['11/2/2012', '1/1/2013'],
YearEnd: ['11/2/2012', '12/31/2012'],
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index f6d80f7ee410b..fcc544ec7f239 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -3095,10 +3095,14 @@ def test_datetime64_with_DateOffset(self):
exp = klass([Timestamp('2001-1-1'), Timestamp('2001-2-1')])
assert_func(result, exp)
- s = klass([Timestamp('2000-01-05 00:15:00'), Timestamp(
- '2000-01-31 00:23:00'), Timestamp('2000-01-01'), Timestamp(
- '2000-03-31'), Timestamp('2000-02-29'), Timestamp(
- '2000-12-31')])
+ s = klass([Timestamp('2000-01-05 00:15:00'),
+ Timestamp('2000-01-31 00:23:00'),
+ Timestamp('2000-01-01'),
+ Timestamp('2000-03-31'),
+ Timestamp('2000-02-29'),
+ Timestamp('2000-12-31'),
+ Timestamp('2000-05-15'),
+ Timestamp('2001-06-15')])
# DateOffset relativedelta fastpath
relative_kwargs = [('years', 2), ('months', 5), ('days', 3),
@@ -3115,6 +3119,7 @@ def test_datetime64_with_DateOffset(self):
# assert these are equal on a piecewise basis
offsets = ['YearBegin', ('YearBegin', {'month': 5}), 'YearEnd',
('YearEnd', {'month': 5}), 'MonthBegin', 'MonthEnd',
+ 'SemiMonthEnd', 'SemiMonthBegin',
'Week', ('Week', {
'weekday': 3
}), 'BusinessDay', 'BDay', 'QuarterEnd', 'QuarterBegin',
| - [x] closes #1543
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13315 | 2016-05-29T06:00:25Z | 2016-06-14T21:22:16Z | null | 2016-06-14T22:18:52Z |
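The roll-forward behaviour exercised by the `SemiMonthEnd()` cases above can be sketched in plain Python. This is a hedged illustration of the default `day_of_month=15` semantics implied by the test tables, not the pandas implementation itself:

```python
from datetime import datetime
import calendar

def next_semi_month_end(dt, day_of_month=15):
    """Roll dt forward to the next semi-month end: either
    day_of_month or the last day of the month, whichever comes
    strictly after dt (mirrors the SemiMonthEnd() test cases)."""
    last = calendar.monthrange(dt.year, dt.month)[1]
    if dt.day < day_of_month:
        return dt.replace(day=day_of_month)
    if dt.day < last:
        return dt.replace(day=last)
    # already at month end: move to day_of_month of the next month
    year, month = (dt.year + 1, 1) if dt.month == 12 else (dt.year, dt.month + 1)
    return datetime(year, month, day_of_month)

print(next_semi_month_end(datetime(2008, 1, 1)))   # -> 2008-01-15 00:00:00
print(next_semi_month_end(datetime(2008, 1, 15)))  # -> 2008-01-31 00:00:00
print(next_semi_month_end(datetime(2008, 1, 31)))  # -> 2008-02-15 00:00:00
```

The three printed values reproduce the first three entries of the `SemiMonthEnd()` expectation table in the diff.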
BUG: Check for NaN after data conversion to numeric | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index dfb5ebc9379b1..262ad9773b71f 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -291,6 +291,7 @@ Bug Fixes
+- Bug in ``pd.read_csv()`` with ``engine='python'`` in which ``NaN`` values weren't being detected after data was converted to numeric values (:issue:`13314`)
- Bug in ``MultiIndex`` slicing where extra elements were returned when level is non-unique (:issue:`12896`)
diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py
index c34549835cb46..b03ae4ae9fc22 100644
--- a/pandas/io/tests/parser/na_values.py
+++ b/pandas/io/tests/parser/na_values.py
@@ -11,7 +11,7 @@
import pandas.io.parsers as parsers
import pandas.util.testing as tm
-from pandas import DataFrame, MultiIndex, read_csv
+from pandas import DataFrame, MultiIndex
from pandas.compat import StringIO, range
@@ -43,57 +43,30 @@ def test_detect_string_na(self):
tm.assert_numpy_array_equal(df.values, expected)
def test_non_string_na_values(self):
- # see gh-3611, na_values that are not a string are an issue
- with tm.ensure_clean('__non_string_na_values__.csv') as path:
- df = DataFrame({'A': [-999, 2, 3], 'B': [1.2, -999, 4.5]})
- df.to_csv(path, sep=' ', index=False)
- result1 = self.read_csv(path, sep=' ', header=0,
- na_values=['-999.0', '-999'])
- result2 = self.read_csv(path, sep=' ', header=0,
- na_values=[-999, -999.0])
- result3 = self.read_csv(path, sep=' ', header=0,
- na_values=[-999.0, -999])
- tm.assert_frame_equal(result1, result2)
- tm.assert_frame_equal(result2, result3)
-
- result4 = self.read_csv(
- path, sep=' ', header=0, na_values=['-999.0'])
- result5 = self.read_csv(
- path, sep=' ', header=0, na_values=['-999'])
- result6 = self.read_csv(
- path, sep=' ', header=0, na_values=[-999.0])
- result7 = self.read_csv(
- path, sep=' ', header=0, na_values=[-999])
- tm.assert_frame_equal(result4, result3)
- tm.assert_frame_equal(result5, result3)
- tm.assert_frame_equal(result6, result3)
- tm.assert_frame_equal(result7, result3)
-
- good_compare = result3
-
- # with an odd float format, so we can't match the string 999.0
- # exactly, but need float matching
- # TODO: change these to self.read_csv when Python bug is squashed
- df.to_csv(path, sep=' ', index=False, float_format='%.3f')
- result1 = read_csv(path, sep=' ', header=0,
- na_values=['-999.0', '-999'])
- result2 = read_csv(path, sep=' ', header=0,
- na_values=[-999.0, -999])
- tm.assert_frame_equal(result1, good_compare)
- tm.assert_frame_equal(result2, good_compare)
-
- result3 = read_csv(path, sep=' ',
- header=0, na_values=['-999.0'])
- result4 = read_csv(path, sep=' ',
- header=0, na_values=['-999'])
- result5 = read_csv(path, sep=' ',
- header=0, na_values=[-999.0])
- result6 = read_csv(path, sep=' ',
- header=0, na_values=[-999])
- tm.assert_frame_equal(result3, good_compare)
- tm.assert_frame_equal(result4, good_compare)
- tm.assert_frame_equal(result5, good_compare)
- tm.assert_frame_equal(result6, good_compare)
+ # see gh-3611: with an odd float format, we can't match
+ # the string '999.0' exactly but still need float matching
+ nice = """A,B
+-999,1.2
+2,-999
+3,4.5
+"""
+ ugly = """A,B
+-999,1.200
+2,-999.000
+3,4.500
+"""
+ na_values_param = [['-999.0', '-999'],
+ [-999, -999.0],
+ [-999.0, -999],
+ ['-999.0'], ['-999'],
+ [-999.0], [-999]]
+ expected = DataFrame([[np.nan, 1.2], [2.0, np.nan],
+ [3.0, 4.5]], columns=['A', 'B'])
+
+ for data in (nice, ugly):
+ for na_values in na_values_param:
+ out = self.read_csv(StringIO(data), na_values=na_values)
+ tm.assert_frame_equal(out, expected)
def test_default_na_values(self):
_NA_VALUES = set(['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN',
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 3ccc1c4f9336c..e2c59a34bdf21 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -596,7 +596,13 @@ def maybe_convert_numeric(object[:] values, set na_values,
else:
try:
status = floatify(val, &fval, &maybe_int)
- floats[i] = fval
+
+ if fval in na_values:
+ floats[i] = complexes[i] = nan
+ seen_float = True
+ else:
+ floats[i] = fval
+
if not seen_float:
if maybe_int:
as_int = int(val)
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 2aa31063df446..c6a703673a4c4 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -188,6 +188,9 @@ def test_isinf_scalar(self):
self.assertFalse(lib.isneginf_scalar(1))
self.assertFalse(lib.isneginf_scalar('a'))
+
+# tests related to functions imported from inference.pyx
+class TestInference(tm.TestCase):
def test_maybe_convert_numeric_infinities(self):
# see gh-13274
infinities = ['inf', 'inF', 'iNf', 'Inf',
@@ -227,6 +230,16 @@ def test_maybe_convert_numeric_infinities(self):
np.array(['foo_' + infinity], dtype=object),
na_values, maybe_int)
+ def test_maybe_convert_numeric_post_floatify_nan(self):
+ # see gh-13314
+ data = np.array(['1.200', '-999.000', '4.500'], dtype=object)
+ expected = np.array([1.2, np.nan, 4.5], dtype=np.float64)
+ nan_values = set([-999, -999.0])
+
+ for coerce_type in (True, False):
+ out = lib.maybe_convert_numeric(data, nan_values, coerce_type)
+ tm.assert_numpy_array_equal(out, expected)
+
class Testisscalar(tm.TestCase):
| While attempting to squash a Python parser bug in which weirdly-formed floats weren't being checked for `nan`, the problem was traced back to a bug in the `maybe_convert_numeric` function of `pandas/src/inference.pyx`. Added tests for the bug in `test_lib.py` and adjusted the original `nan` tests in `na_values.py` so that they exercise all of the engines.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13314 | 2016-05-28T22:45:50Z | 2016-05-30T13:44:38Z | null | 2016-05-30T14:10:14Z |
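The essence of the `inference.pyx` fix above — comparing the *parsed* float against `na_values`, so that the string `'-999.000'` matches a numeric sentinel of `-999` — can be mimicked in a standalone sketch. `convert_numeric` is a hypothetical helper for illustration, not the actual pandas code path:

```python
def convert_numeric(tokens, na_values):
    """After floatifying each token, compare the resulting value
    (not the raw string) against na_values, mirroring the fix."""
    out = []
    for tok in tokens:
        fval = float(tok)
        # post-floatify NaN check: '-999.000' -> -999.0, which is
        # equal to the integer sentinel -999 in the na_values set
        out.append(float('nan') if fval in na_values else fval)
    return out

result = convert_numeric(['1.200', '-999.000', '4.500'], {-999, -999.0})
print(result)  # -> [1.2, nan, 4.5]
```

The input and expected output match the `test_maybe_convert_numeric_post_floatify_nan` case added in the diff.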
Fix #13306: Hour overflow in tz-aware datetime conversions. | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 27540a9626398..4965dc01b9e88 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -338,7 +338,7 @@ Bug Fixes
- Bug in ``.resample(..)`` with a ``PeriodIndex`` not changing its ``freq`` appropriately when empty (:issue:`13067`)
- Bug in ``.resample(..)`` with a ``PeriodIndex`` not retaining its type or name with an empty ``DataFrame``appropriately when empty (:issue:`13212`)
- Bug in ``groupby(..).resample(..)`` where passing some keywords would raise an exception (:issue:`13235`)
-
+- Bug in ``.tz_convert`` on a tz-aware ``DateTimeIndex`` that relied on the index being sorted for correct results (:issue:`13306`)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index b80ee4c5c1e39..afe9d0652db19 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -902,6 +902,88 @@ def test_utc_with_system_utc(self):
# check that the time hasn't changed.
self.assertEqual(ts, ts.tz_convert(dateutil.tz.tzutc()))
+ def test_tz_convert_hour_overflow_dst(self):
+ # Regression test for:
+ # https://github.com/pydata/pandas/issues/13306
+
+ # sorted case US/Eastern -> UTC
+ ts = ['2008-05-12 09:50:00',
+ '2008-12-12 09:50:35',
+ '2009-05-12 09:50:32']
+ tt = to_datetime(ts).tz_localize('US/Eastern')
+ ut = tt.tz_convert('UTC')
+ expected = np.array([13, 14, 13], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # sorted case UTC -> US/Eastern
+ ts = ['2008-05-12 13:50:00',
+ '2008-12-12 14:50:35',
+ '2009-05-12 13:50:32']
+ tt = to_datetime(ts).tz_localize('UTC')
+ ut = tt.tz_convert('US/Eastern')
+ expected = np.array([9, 9, 9], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # unsorted case US/Eastern -> UTC
+ ts = ['2008-05-12 09:50:00',
+ '2008-12-12 09:50:35',
+ '2008-05-12 09:50:32']
+ tt = to_datetime(ts).tz_localize('US/Eastern')
+ ut = tt.tz_convert('UTC')
+ expected = np.array([13, 14, 13], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # unsorted case UTC -> US/Eastern
+ ts = ['2008-05-12 13:50:00',
+ '2008-12-12 14:50:35',
+ '2008-05-12 13:50:32']
+ tt = to_datetime(ts).tz_localize('UTC')
+ ut = tt.tz_convert('US/Eastern')
+ expected = np.array([9, 9, 9], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ def test_tz_convert_hour_overflow_dst_timestamps(self):
+ # Regression test for:
+ # https://github.com/pydata/pandas/issues/13306
+
+ tz = self.tzstr('US/Eastern')
+
+ # sorted case US/Eastern -> UTC
+ ts = [Timestamp('2008-05-12 09:50:00', tz=tz),
+ Timestamp('2008-12-12 09:50:35', tz=tz),
+ Timestamp('2009-05-12 09:50:32', tz=tz)]
+ tt = to_datetime(ts)
+ ut = tt.tz_convert('UTC')
+ expected = np.array([13, 14, 13], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # sorted case UTC -> US/Eastern
+ ts = [Timestamp('2008-05-12 13:50:00', tz='UTC'),
+ Timestamp('2008-12-12 14:50:35', tz='UTC'),
+ Timestamp('2009-05-12 13:50:32', tz='UTC')]
+ tt = to_datetime(ts)
+ ut = tt.tz_convert('US/Eastern')
+ expected = np.array([9, 9, 9], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # unsorted case US/Eastern -> UTC
+ ts = [Timestamp('2008-05-12 09:50:00', tz=tz),
+ Timestamp('2008-12-12 09:50:35', tz=tz),
+ Timestamp('2008-05-12 09:50:32', tz=tz)]
+ tt = to_datetime(ts)
+ ut = tt.tz_convert('UTC')
+ expected = np.array([13, 14, 13], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
+ # unsorted case UTC -> US/Eastern
+ ts = [Timestamp('2008-05-12 13:50:00', tz='UTC'),
+ Timestamp('2008-12-12 14:50:35', tz='UTC'),
+ Timestamp('2008-05-12 13:50:32', tz='UTC')]
+ tt = to_datetime(ts)
+ ut = tt.tz_convert('US/Eastern')
+ expected = np.array([9, 9, 9], dtype=np.int32)
+ self.assert_numpy_array_equal(ut.hour, expected)
+
def test_tslib_tz_convert_trans_pos_plus_1__bug(self):
# Regression test for tslib.tz_convert(vals, tz1, tz2).
# See https://github.com/pydata/pandas/issues/4496 for details.
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index b3fb4989b2f23..6453e65ecdc81 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -3754,8 +3754,8 @@ except:
def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
cdef:
- ndarray[int64_t] utc_dates, tt, result, trans, deltas
- Py_ssize_t i, pos, n = len(vals)
+ ndarray[int64_t] utc_dates, tt, result, trans, deltas, posn
+ Py_ssize_t i, j, pos, n = len(vals)
int64_t v, offset
pandas_datetimestruct dts
Py_ssize_t trans_len
@@ -3791,19 +3791,18 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
return vals
trans_len = len(trans)
- pos = trans.searchsorted(tt[0]) - 1
- if pos < 0:
- raise ValueError('First time before start of DST info')
-
- offset = deltas[pos]
+ posn = trans.searchsorted(tt, side='right')
+ j = 0
for i in range(n):
v = vals[i]
if v == NPY_NAT:
utc_dates[i] = NPY_NAT
else:
- while pos + 1 < trans_len and v >= trans[pos + 1]:
- pos += 1
- offset = deltas[pos]
+ pos = posn[j] - 1
+ j = j + 1
+ if pos < 0:
+ raise ValueError('First time before start of DST info')
+ offset = deltas[pos]
utc_dates[i] = v - offset
else:
utc_dates = vals
@@ -3838,20 +3837,18 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
if (result==NPY_NAT).all():
return result
- pos = trans.searchsorted(utc_dates[utc_dates!=NPY_NAT][0]) - 1
- if pos < 0:
- raise ValueError('First time before start of DST info')
-
- # TODO: this assumed sortedness :/
- offset = deltas[pos]
+ posn = trans.searchsorted(utc_dates[utc_dates!=NPY_NAT], side='right')
+ j = 0
for i in range(n):
v = utc_dates[i]
if vals[i] == NPY_NAT:
result[i] = vals[i]
else:
- while pos + 1 < trans_len and v >= trans[pos + 1]:
- pos += 1
- offset = deltas[pos]
+ pos = posn[j] - 1
+ j = j + 1
+ if pos < 0:
+ raise ValueError('First time before start of DST info')
+ offset = deltas[pos]
result[i] = v + offset
return result
| - [x] closes #13306
- [x] tests passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Bug: tz-converting tz-aware DateTimeIndex relied on index being sorted for correct results.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13313 | 2016-05-28T07:07:58Z | 2016-06-02T18:00:21Z | null | 2016-06-02T18:01:05Z |
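The core of the fix above replaces a stateful sequential scan over the DST transition table (which silently assumed sorted input) with a per-value binary search. A stdlib sketch of that lookup, using toy `trans`/`deltas` tables rather than real timezone data:

```python
import bisect

trans = [0, 100, 200]    # DST transition instants, ascending
deltas = [-5, -4, -5]    # offset in effect from each transition onward

def offset_at(v):
    # bisect_right - 1 finds the last transition <= v; unlike the
    # old running `pos` cursor, this is correct no matter what
    # order the values are processed in
    pos = bisect.bisect_right(trans, v) - 1
    if pos < 0:
        raise ValueError('First time before start of DST info')
    return deltas[pos]

# unsorted input: the old sequential scan would have kept a stale pos
print([offset_at(v) for v in [150, 250, 50]])  # -> [-4, -5, -5]
```

This mirrors the `trans.searchsorted(..., side='right')` call in the Cython patch, including the "before start of DST info" guard.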
DOC: Fix read_stata docstring | diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 6c6e11a53d2d3..ae7200cf6fb2e 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -89,12 +89,14 @@
Examples
--------
Read a Stata dta file:
->> df = pandas.read_stata('filename.dta')
+
+>>> df = pandas.read_stata('filename.dta')
Read a Stata dta file in 10,000 line chunks:
->> itr = pandas.read_stata('filename.dta', chunksize=10000)
->> for chunk in itr:
->> do_something(chunk)
+
+>>> itr = pandas.read_stata('filename.dta', chunksize=10000)
+>>> for chunk in itr:
+>>> do_something(chunk)
""" % (_statafile_processing_params1, _encoding_params,
_statafile_processing_params2, _chunksize_params,
_iterator_params)
| - [x] passes `git diff upstream/master | flake8 --diff`
Found that the docstring example is broken:
- http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_stata.html
| https://api.github.com/repos/pandas-dev/pandas/pulls/13312 | 2016-05-28T04:51:59Z | 2016-05-29T14:45:37Z | null | 2016-05-29T22:11:33Z |
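A side note on the doctest convention this PR touches: inside a compound statement, continuation lines use `...` rather than a second `>>>` prompt. A quick check with the stdlib `doctest` module (illustrative only — Sphinx renders these docstring blocks without executing them):

```python
import doctest

docstring = """
>>> total = 0
>>> for chunk in [1, 2, 3]:
...     total += chunk
>>> total
6
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(docstring, {}, 'chunks', None, 0)
result = doctest.DocTestRunner(verbose=False).run(test)
print(result.failed, result.attempted)  # -> 0 3
```

All three examples pass because the loop body is parsed as part of the `for` statement via the `...` continuation marker.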
TST: Make numpy_array test strict | diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index 023519fd7fc20..aaafcb5b41645 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -185,6 +185,16 @@ def test_chained_cmp_op(self):
mids, cmp_ops, self.rhses):
self.check_chained_cmp_op(lhs, cmp1, mid, cmp2, rhs)
+ def check_equal(self, result, expected):
+ if isinstance(result, DataFrame):
+ tm.assert_frame_equal(result, expected)
+ elif isinstance(result, Series):
+ tm.assert_series_equal(result, expected)
+ elif isinstance(result, np.ndarray):
+ tm.assert_numpy_array_equal(result, expected)
+ else:
+ self.assertEqual(result, expected)
+
def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
skip_these = _scalar_skip
ex = '(lhs {cmp1} rhs) {binop} (lhs {cmp2} rhs)'.format(cmp1=cmp1,
@@ -218,7 +228,7 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
expected = _eval_single_bin(
lhs_new, binop, rhs_new, self.engine)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
- tm.assert_numpy_array_equal(result, expected)
+ self.check_equal(result, expected)
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
skip_these = _scalar_skip
@@ -249,7 +259,7 @@ def check_simple_cmp_op(self, lhs, cmp1, rhs):
else:
expected = _eval_single_bin(lhs, cmp1, rhs, self.engine)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
- tm.assert_numpy_array_equal(result, expected)
+ self.check_equal(result, expected)
def check_binary_arith_op(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
@@ -293,7 +303,7 @@ def check_floor_division(self, lhs, arith1, rhs):
if self.engine == 'python':
res = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs // rhs
- tm.assert_numpy_array_equal(res, expected)
+ self.check_equal(res, expected)
else:
self.assertRaises(TypeError, pd.eval, ex, local_dict={'lhs': lhs,
'rhs': rhs},
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 1be6ce810791b..03fe71d4f5125 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -316,8 +316,8 @@ def array_equivalent(left, right, strict_nan=False):
if not strict_nan:
# pd.isnull considers NaN and None to be equivalent.
- return lib.array_equivalent_object(
- _ensure_object(left.ravel()), _ensure_object(right.ravel()))
+ return lib.array_equivalent_object(_ensure_object(left.ravel()),
+ _ensure_object(right.ravel()))
for left_value, right_value in zip(left, right):
if left_value is tslib.NaT and right_value is not tslib.NaT:
diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index cad469de86fe9..43b8d6b9563f1 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -99,8 +99,8 @@ def test_frame_non_unique_index(self):
assert_frame_equal(df, read_json(df.to_json(orient='split'),
orient='split'))
unser = read_json(df.to_json(orient='records'), orient='records')
- self.assertTrue(df.columns.equals(unser.columns))
- tm.assert_numpy_array_equal(df.values, unser.values)
+ self.assert_index_equal(df.columns, unser.columns)
+ np.testing.assert_equal(df.values, unser.values)
unser = read_json(df.to_json(orient='values'), orient='values')
tm.assert_numpy_array_equal(df.values, unser.values)
@@ -183,7 +183,8 @@ def _check_orient(df, orient, dtype=None, numpy=False,
# index is not captured in this orientation
assert_almost_equal(df.values, unser.values,
check_dtype=check_numpy_dtype)
- self.assertTrue(df.columns.equals(unser.columns))
+ self.assert_index_equal(df.columns, unser.columns,
+ exact=check_column_type)
elif orient == "values":
# index and cols are not captured in this orientation
if numpy is True and df.shape == (0, 0):
@@ -302,12 +303,10 @@ def _check_all_orients(df, dtype=None, convert_axes=True,
# mixed data
index = pd.Index(['a', 'b', 'c', 'd', 'e'])
- data = {
- 'A': [0., 1., 2., 3., 4.],
- 'B': [0., 1., 0., 1., 0.],
- 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
- 'D': [True, False, True, False, True]
- }
+ data = {'A': [0., 1., 2., 3., 4.],
+ 'B': [0., 1., 0., 1., 0.],
+ 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
+ 'D': [True, False, True, False, True]}
df = DataFrame(data=data, index=index)
_check_orient(df, "split", check_dtype=False)
_check_orient(df, "records", check_dtype=False)
diff --git a/pandas/io/tests/json/test_ujson.py b/pandas/io/tests/json/test_ujson.py
index 8e4b492c984f1..13b2dafec9c89 100644
--- a/pandas/io/tests/json/test_ujson.py
+++ b/pandas/io/tests/json/test_ujson.py
@@ -1201,19 +1201,19 @@ def testDataFrame(self):
# column indexed
outp = DataFrame(ujson.decode(ujson.encode(df)))
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
- tm.assert_numpy_array_equal(df.index, outp.index)
+ tm.assert_index_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.index, outp.index)
dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split")))
outp = DataFrame(**dec)
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
- tm.assert_numpy_array_equal(df.index, outp.index)
+ tm.assert_index_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.index, outp.index)
outp = DataFrame(ujson.decode(ujson.encode(df, orient="records")))
outp.index = df.index
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.columns, outp.columns)
outp = DataFrame(ujson.decode(ujson.encode(df, orient="values")))
outp.index = df.index
@@ -1221,8 +1221,8 @@ def testDataFrame(self):
outp = DataFrame(ujson.decode(ujson.encode(df, orient="index")))
self.assertTrue((df.transpose() == outp).values.all())
- tm.assert_numpy_array_equal(df.transpose().columns, outp.columns)
- tm.assert_numpy_array_equal(df.transpose().index, outp.index)
+ tm.assert_index_equal(df.transpose().columns, outp.columns)
+ tm.assert_index_equal(df.transpose().index, outp.index)
def testDataFrameNumpy(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[
@@ -1231,21 +1231,21 @@ def testDataFrameNumpy(self):
# column indexed
outp = DataFrame(ujson.decode(ujson.encode(df), numpy=True))
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
- tm.assert_numpy_array_equal(df.index, outp.index)
+ tm.assert_index_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.index, outp.index)
dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"),
numpy=True))
outp = DataFrame(**dec)
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
- tm.assert_numpy_array_equal(df.index, outp.index)
+ tm.assert_index_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.index, outp.index)
- outp = DataFrame(ujson.decode(
- ujson.encode(df, orient="index"), numpy=True))
+ outp = DataFrame(ujson.decode(ujson.encode(df, orient="index"),
+ numpy=True))
self.assertTrue((df.transpose() == outp).values.all())
- tm.assert_numpy_array_equal(df.transpose().columns, outp.columns)
- tm.assert_numpy_array_equal(df.transpose().index, outp.index)
+ tm.assert_index_equal(df.transpose().columns, outp.columns)
+ tm.assert_index_equal(df.transpose().index, outp.index)
def testDataFrameNested(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[
@@ -1285,20 +1285,20 @@ def testDataFrameNumpyLabelled(self):
outp = DataFrame(*ujson.decode(ujson.encode(df),
numpy=True, labelled=True))
self.assertTrue((df.T == outp).values.all())
- tm.assert_numpy_array_equal(df.T.columns, outp.columns)
- tm.assert_numpy_array_equal(df.T.index, outp.index)
+ tm.assert_index_equal(df.T.columns, outp.columns)
+ tm.assert_index_equal(df.T.index, outp.index)
outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"),
numpy=True, labelled=True))
outp.index = df.index
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.columns, outp.columns)
outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"),
numpy=True, labelled=True))
self.assertTrue((df == outp).values.all())
- tm.assert_numpy_array_equal(df.columns, outp.columns)
- tm.assert_numpy_array_equal(df.index, outp.index)
+ tm.assert_index_equal(df.columns, outp.columns)
+ tm.assert_index_equal(df.index, outp.index)
def testSeries(self):
s = Series([10, 20, 30, 40, 50, 60], name="series",
@@ -1378,42 +1378,46 @@ def testIndex(self):
i = Index([23, 45, 18, 98, 43, 11], name="index")
# column indexed
- outp = Index(ujson.decode(ujson.encode(i)))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i)), name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(i), numpy=True))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i), numpy=True), name='index')
+ tm.assert_index_equal(i, outp)
dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split")))
outp = Index(**dec)
- self.assertTrue(i.equals(outp))
+ tm.assert_index_equal(i, outp)
self.assertTrue(i.name == outp.name)
dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"),
numpy=True))
outp = Index(**dec)
- self.assertTrue(i.equals(outp))
+ tm.assert_index_equal(i, outp)
self.assertTrue(i.name == outp.name)
- outp = Index(ujson.decode(ujson.encode(i, orient="values")))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="values")),
+ name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(
- i, orient="values"), numpy=True))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="values"),
+ numpy=True), name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(i, orient="records")))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="records")),
+ name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(
- i, orient="records"), numpy=True))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="records"),
+ numpy=True), name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(i, orient="index")))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="index")),
+ name='index')
+ tm.assert_index_equal(i, outp)
- outp = Index(ujson.decode(ujson.encode(i, orient="index"), numpy=True))
- self.assertTrue(i.equals(outp))
+ outp = Index(ujson.decode(ujson.encode(i, orient="index"),
+ numpy=True), name='index')
+ tm.assert_index_equal(i, outp)
def test_datetimeindex(self):
from pandas.tseries.index import date_range
@@ -1423,7 +1427,7 @@ def test_datetimeindex(self):
encoded = ujson.encode(rng, date_unit='ns')
decoded = DatetimeIndex(np.array(ujson.decode(encoded)))
- self.assertTrue(rng.equals(decoded))
+ tm.assert_index_equal(rng, decoded)
ts = Series(np.random.randn(len(rng)), index=rng)
decoded = Series(ujson.decode(ujson.encode(ts, date_unit='ns')))
diff --git a/pandas/io/tests/parser/comment.py b/pandas/io/tests/parser/comment.py
index 07fc6a167a6c0..f7cd1e190ec16 100644
--- a/pandas/io/tests/parser/comment.py
+++ b/pandas/io/tests/parser/comment.py
@@ -19,14 +19,14 @@ def test_comment(self):
1,2.,4.#hello world
5.,NaN,10.0
"""
- expected = [[1., 2., 4.],
- [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.],
+ [5., np.nan, 10.]])
df = self.read_csv(StringIO(data), comment='#')
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
df = self.read_table(StringIO(data), sep=',', comment='#',
na_values=['NaN'])
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_line_comment(self):
data = """# empty
@@ -35,10 +35,10 @@ def test_line_comment(self):
#ignore this line
5.,NaN,10.0
"""
- expected = [[1., 2., 4.],
- [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.],
+ [5., np.nan, 10.]])
df = self.read_csv(StringIO(data), comment='#')
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
# check with delim_whitespace=True
df = self.read_csv(StringIO(data.replace(',', ' ')), comment='#',
@@ -48,11 +48,11 @@ def test_line_comment(self):
# custom line terminator is not supported
# with the Python parser yet
if self.engine == 'c':
- expected = [[1., 2., 4.],
- [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.],
+ [5., np.nan, 10.]])
df = self.read_csv(StringIO(data.replace('\n', '*')),
comment='#', lineterminator='*')
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_comment_skiprows(self):
data = """# empty
@@ -64,9 +64,9 @@ def test_comment_skiprows(self):
5.,NaN,10.0
"""
# this should ignore the first four lines (including comments)
- expected = [[1., 2., 4.], [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.], [5., np.nan, 10.]])
df = self.read_csv(StringIO(data), comment='#', skiprows=4)
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_comment_header(self):
data = """# empty
@@ -77,9 +77,9 @@ def test_comment_header(self):
5.,NaN,10.0
"""
# header should begin at the second non-comment line
- expected = [[1., 2., 4.], [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.], [5., np.nan, 10.]])
df = self.read_csv(StringIO(data), comment='#', header=1)
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_comment_skiprows_header(self):
data = """# empty
@@ -94,9 +94,9 @@ def test_comment_skiprows_header(self):
# skiprows should skip the first 4 lines (including comments), while
# header should start from the second non-commented line starting
# with line 5
- expected = [[1., 2., 4.], [5., np.nan, 10.]]
+ expected = np.array([[1., 2., 4.], [5., np.nan, 10.]])
df = self.read_csv(StringIO(data), comment='#', skiprows=4, header=1)
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_custom_comment_char(self):
data = "a,b,c\n1,2,3#ignore this!\n4,5,6#ignorethistoo"
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 2be0c4edb8f5d..14f4de853e118 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -232,14 +232,14 @@ def test_unnamed_columns(self):
6,7,8,9,10
11,12,13,14,15
"""
- expected = [[1, 2, 3, 4, 5.],
- [6, 7, 8, 9, 10],
- [11, 12, 13, 14, 15]]
+ expected = np.array([[1, 2, 3, 4, 5],
+ [6, 7, 8, 9, 10],
+ [11, 12, 13, 14, 15]], dtype=np.int64)
df = self.read_table(StringIO(data), sep=',')
tm.assert_almost_equal(df.values, expected)
- self.assert_numpy_array_equal(df.columns,
- ['A', 'B', 'C', 'Unnamed: 3',
- 'Unnamed: 4'])
+ self.assert_index_equal(df.columns,
+ Index(['A', 'B', 'C', 'Unnamed: 3',
+ 'Unnamed: 4']))
def test_duplicate_columns(self):
# TODO: add test for condition 'mangle_dupe_cols=False'
@@ -275,7 +275,7 @@ def test_read_csv_dataframe(self):
df = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df2 = self.read_table(self.csv1, sep=',', index_col=0,
parse_dates=True)
- self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D'])
+ self.assert_index_equal(df.columns, pd.Index(['A', 'B', 'C', 'D']))
self.assertEqual(df.index.name, 'index')
self.assertIsInstance(
df.index[0], (datetime, np.datetime64, Timestamp))
@@ -286,12 +286,12 @@ def test_read_csv_no_index_name(self):
df = self.read_csv(self.csv2, index_col=0, parse_dates=True)
df2 = self.read_table(self.csv2, sep=',', index_col=0,
parse_dates=True)
- self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D', 'E'])
- self.assertIsInstance(
- df.index[0], (datetime, np.datetime64, Timestamp))
- self.assertEqual(df.ix[
- :, ['A', 'B', 'C', 'D']
- ].values.dtype, np.float64)
+ self.assert_index_equal(df.columns,
+ pd.Index(['A', 'B', 'C', 'D', 'E']))
+ self.assertIsInstance(df.index[0],
+ (datetime, np.datetime64, Timestamp))
+ self.assertEqual(df.ix[:, ['A', 'B', 'C', 'D']].values.dtype,
+ np.float64)
tm.assert_frame_equal(df, df2)
def test_read_table_unicode(self):
@@ -1121,21 +1121,21 @@ def test_empty_lines(self):
-70,.4,1
"""
- expected = [[1., 2., 4.],
- [5., np.nan, 10.],
- [-70., .4, 1.]]
+ expected = np.array([[1., 2., 4.],
+ [5., np.nan, 10.],
+ [-70., .4, 1.]])
df = self.read_csv(StringIO(data))
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
df = self.read_csv(StringIO(data.replace(',', ' ')), sep='\s+')
- tm.assert_almost_equal(df.values, expected)
- expected = [[1., 2., 4.],
- [np.nan, np.nan, np.nan],
- [np.nan, np.nan, np.nan],
- [5., np.nan, 10.],
- [np.nan, np.nan, np.nan],
- [-70., .4, 1.]]
+ tm.assert_numpy_array_equal(df.values, expected)
+ expected = np.array([[1., 2., 4.],
+ [np.nan, np.nan, np.nan],
+ [np.nan, np.nan, np.nan],
+ [5., np.nan, 10.],
+ [np.nan, np.nan, np.nan],
+ [-70., .4, 1.]])
df = self.read_csv(StringIO(data), skip_blank_lines=False)
- tm.assert_almost_equal(list(df.values), list(expected))
+ tm.assert_numpy_array_equal(df.values, expected)
def test_whitespace_lines(self):
data = """
@@ -1146,10 +1146,10 @@ def test_whitespace_lines(self):
\t 1,2.,4.
5.,NaN,10.0
"""
- expected = [[1, 2., 4.],
- [5., np.nan, 10.]]
+ expected = np.array([[1, 2., 4.],
+ [5., np.nan, 10.]])
df = self.read_csv(StringIO(data))
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_regex_separator(self):
# see gh-6607
diff --git a/pandas/io/tests/parser/header.py b/pandas/io/tests/parser/header.py
index e3c408f0af907..ca148b373d659 100644
--- a/pandas/io/tests/parser/header.py
+++ b/pandas/io/tests/parser/header.py
@@ -43,14 +43,14 @@ def test_no_header_prefix(self):
df_pref = self.read_table(StringIO(data), sep=',', prefix='Field',
header=None)
- expected = [[1, 2, 3, 4, 5.],
- [6, 7, 8, 9, 10],
- [11, 12, 13, 14, 15]]
+ expected = np.array([[1, 2, 3, 4, 5],
+ [6, 7, 8, 9, 10],
+ [11, 12, 13, 14, 15]], dtype=np.int64)
tm.assert_almost_equal(df_pref.values, expected)
- self.assert_numpy_array_equal(
- df_pref.columns, ['Field0', 'Field1', 'Field2',
- 'Field3', 'Field4'])
+ self.assert_index_equal(df_pref.columns,
+ Index(['Field0', 'Field1', 'Field2',
+ 'Field3', 'Field4']))
def test_header_with_index_col(self):
data = """foo,1,2,3
@@ -262,14 +262,14 @@ def test_no_header(self):
names = ['foo', 'bar', 'baz', 'quux', 'panda']
df2 = self.read_table(StringIO(data), sep=',', names=names)
- expected = [[1, 2, 3, 4, 5.],
- [6, 7, 8, 9, 10],
- [11, 12, 13, 14, 15]]
+ expected = np.array([[1, 2, 3, 4, 5],
+ [6, 7, 8, 9, 10],
+ [11, 12, 13, 14, 15]], dtype=np.int64)
tm.assert_almost_equal(df.values, expected)
tm.assert_almost_equal(df.values, df2.values)
- self.assert_numpy_array_equal(df_pref.columns,
- ['X0', 'X1', 'X2', 'X3', 'X4'])
- self.assert_numpy_array_equal(df.columns, lrange(5))
+ self.assert_index_equal(df_pref.columns,
+ Index(['X0', 'X1', 'X2', 'X3', 'X4']))
+ self.assert_index_equal(df.columns, Index(lrange(5)))
- self.assert_numpy_array_equal(df2.columns, names)
+ self.assert_index_equal(df2.columns, Index(names))
diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py
index 853e6242751c9..c34549835cb46 100644
--- a/pandas/io/tests/parser/na_values.py
+++ b/pandas/io/tests/parser/na_values.py
@@ -37,9 +37,10 @@ def test_detect_string_na(self):
NA,baz
NaN,nan
"""
- expected = [['foo', 'bar'], [nan, 'baz'], [nan, nan]]
+ expected = np.array([['foo', 'bar'], [nan, 'baz'], [nan, nan]],
+ dtype=np.object_)
df = self.read_csv(StringIO(data))
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
def test_non_string_na_values(self):
# see gh-3611, na_values that are not a string are an issue
@@ -126,20 +127,20 @@ def test_custom_na_values(self):
-1.#IND,5,baz
7,8,NaN
"""
- expected = [[1., nan, 3],
- [nan, 5, nan],
- [7, 8, nan]]
+ expected = np.array([[1., nan, 3],
+ [nan, 5, nan],
+ [7, 8, nan]])
df = self.read_csv(StringIO(data), na_values=['baz'], skiprows=[1])
- tm.assert_almost_equal(df.values, expected)
+ tm.assert_numpy_array_equal(df.values, expected)
df2 = self.read_table(StringIO(data), sep=',', na_values=['baz'],
skiprows=[1])
- tm.assert_almost_equal(df2.values, expected)
+ tm.assert_numpy_array_equal(df2.values, expected)
df3 = self.read_table(StringIO(data), sep=',', na_values='baz',
skiprows=[1])
- tm.assert_almost_equal(df3.values, expected)
+ tm.assert_numpy_array_equal(df3.values, expected)
def test_bool_na_values(self):
data = """A,B,C
diff --git a/pandas/io/tests/parser/python_parser_only.py b/pandas/io/tests/parser/python_parser_only.py
index 7d1793c429f4e..a08cb36c13f80 100644
--- a/pandas/io/tests/parser/python_parser_only.py
+++ b/pandas/io/tests/parser/python_parser_only.py
@@ -40,7 +40,8 @@ def test_sniff_delimiter(self):
baz|7|8|9
"""
data = self.read_csv(StringIO(text), index_col=0, sep=None)
- self.assertTrue(data.index.equals(Index(['foo', 'bar', 'baz'])))
+ self.assert_index_equal(data.index,
+ Index(['foo', 'bar', 'baz'], name='index'))
data2 = self.read_csv(StringIO(text), index_col=0, delimiter='|')
tm.assert_frame_equal(data, data2)
diff --git a/pandas/io/tests/parser/test_read_fwf.py b/pandas/io/tests/parser/test_read_fwf.py
index 5599188400368..11b10211650d6 100644
--- a/pandas/io/tests/parser/test_read_fwf.py
+++ b/pandas/io/tests/parser/test_read_fwf.py
@@ -217,8 +217,8 @@ def test_comment_fwf(self):
1 2. 4 #hello world
5 NaN 10.0
"""
- expected = [[1, 2., 4],
- [5, np.nan, 10.]]
+ expected = np.array([[1, 2., 4],
+ [5, np.nan, 10.]])
df = read_fwf(StringIO(data), colspecs=[(0, 3), (4, 9), (9, 25)],
comment='#')
tm.assert_almost_equal(df.values, expected)
@@ -228,8 +228,8 @@ def test_1000_fwf(self):
1 2,334.0 5
10 13 10.
"""
- expected = [[1, 2334., 5],
- [10, 13, 10]]
+ expected = np.array([[1, 2334., 5],
+ [10, 13, 10]])
df = read_fwf(StringIO(data), colspecs=[(0, 3), (3, 11), (12, 16)],
thousands=',')
tm.assert_almost_equal(df.values, expected)
diff --git a/pandas/io/tests/parser/test_textreader.py b/pandas/io/tests/parser/test_textreader.py
index f3de604f1ec48..c35cfca7012d3 100644
--- a/pandas/io/tests/parser/test_textreader.py
+++ b/pandas/io/tests/parser/test_textreader.py
@@ -76,8 +76,12 @@ def test_skipinitialspace(self):
header=None)
result = reader.read()
- self.assert_numpy_array_equal(result[0], ['a', 'a', 'a', 'a'])
- self.assert_numpy_array_equal(result[1], ['b', 'b', 'b', 'b'])
+ self.assert_numpy_array_equal(result[0],
+ np.array(['a', 'a', 'a', 'a'],
+ dtype=np.object_))
+ self.assert_numpy_array_equal(result[1],
+ np.array(['b', 'b', 'b', 'b'],
+ dtype=np.object_))
def test_parse_booleans(self):
data = 'True\nFalse\nTrue\nTrue'
@@ -94,8 +98,10 @@ def test_delimit_whitespace(self):
header=None)
result = reader.read()
- self.assert_numpy_array_equal(result[0], ['a', 'a', 'a'])
- self.assert_numpy_array_equal(result[1], ['b', 'b', 'b'])
+ self.assert_numpy_array_equal(result[0], np.array(['a', 'a', 'a'],
+ dtype=np.object_))
+ self.assert_numpy_array_equal(result[1], np.array(['b', 'b', 'b'],
+ dtype=np.object_))
def test_embedded_newline(self):
data = 'a\n"hello\nthere"\nthis'
@@ -103,7 +109,7 @@ def test_embedded_newline(self):
reader = TextReader(StringIO(data), header=None)
result = reader.read()
- expected = ['a', 'hello\nthere', 'this']
+ expected = np.array(['a', 'hello\nthere', 'this'], dtype=np.object_)
self.assert_numpy_array_equal(result[0], expected)
def test_euro_decimal(self):
@@ -113,7 +119,7 @@ def test_euro_decimal(self):
decimal=',', header=None)
result = reader.read()
- expected = [12345.67, 345.678]
+ expected = np.array([12345.67, 345.678])
tm.assert_almost_equal(result[0], expected)
def test_integer_thousands(self):
@@ -123,7 +129,7 @@ def test_integer_thousands(self):
thousands=',', header=None)
result = reader.read()
- expected = [123456, 12500]
+ expected = np.array([123456, 12500], dtype=np.int64)
tm.assert_almost_equal(result[0], expected)
def test_integer_thousands_alt(self):
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index b056f34b5f00e..5a95fe7727df0 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -519,7 +519,7 @@ def test_nyse_wsj_commas_table(self):
'Volume', 'Price', 'Chg', '% Chg'])
nrows = 100
self.assertEqual(df.shape[0], nrows)
- self.assertTrue(df.columns.equals(columns))
+ self.assert_index_equal(df.columns, columns)
@tm.slow
def test_banklist_header(self):
diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py
index 7c61a6942e8e7..b647ec6b25717 100644
--- a/pandas/io/tests/test_packers.py
+++ b/pandas/io/tests/test_packers.py
@@ -150,7 +150,11 @@ def test_scalar_complex(self):
def test_list_numpy_float(self):
x = [np.float32(np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
- tm.assert_almost_equal(x, x_rec)
+ # current msgpack cannot distinguish list/tuple
+ tm.assert_almost_equal(tuple(x), x_rec)
+
+ x_rec = self.encode_decode(tuple(x))
+ tm.assert_almost_equal(tuple(x), x_rec)
def test_list_numpy_float_complex(self):
if not hasattr(np, 'complex128'):
@@ -165,7 +169,11 @@ def test_list_numpy_float_complex(self):
def test_list_float(self):
x = [np.random.rand() for i in range(5)]
x_rec = self.encode_decode(x)
- tm.assert_almost_equal(x, x_rec)
+ # current msgpack cannot distinguish list/tuple
+ tm.assert_almost_equal(tuple(x), x_rec)
+
+ x_rec = self.encode_decode(tuple(x))
+ tm.assert_almost_equal(tuple(x), x_rec)
def test_list_float_complex(self):
x = [np.random.rand() for i in range(5)] + \
@@ -217,7 +225,11 @@ def test_numpy_array_complex(self):
def test_list_mixed(self):
x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
x_rec = self.encode_decode(x)
- tm.assert_almost_equal(x, x_rec)
+ # current msgpack cannot distinguish list/tuple
+ tm.assert_almost_equal(tuple(x), x_rec)
+
+ x_rec = self.encode_decode(tuple(x))
+ tm.assert_almost_equal(tuple(x), x_rec)
class TestBasic(TestPackers):
@@ -286,30 +298,30 @@ def test_basic_index(self):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
- self.assertTrue(i.equals(i_rec))
+ self.assert_index_equal(i, i_rec)
# datetime with no freq (GH5506)
i = Index([Timestamp('20130101'), Timestamp('20130103')])
i_rec = self.encode_decode(i)
- self.assertTrue(i.equals(i_rec))
+ self.assert_index_equal(i, i_rec)
# datetime with timezone
i = Index([Timestamp('20130101 9:00:00'), Timestamp(
'20130103 11:00:00')]).tz_localize('US/Eastern')
i_rec = self.encode_decode(i)
- self.assertTrue(i.equals(i_rec))
+ self.assert_index_equal(i, i_rec)
def test_multi_index(self):
for s, i in self.mi.items():
i_rec = self.encode_decode(i)
- self.assertTrue(i.equals(i_rec))
+ self.assert_index_equal(i, i_rec)
def test_unicode(self):
i = tm.makeUnicodeIndex(100)
i_rec = self.encode_decode(i)
- self.assertTrue(i.equals(i_rec))
+ self.assert_index_equal(i, i_rec)
class TestSeries(TestPackers):
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index 7f2813d5281cb..c12d6e02e3a2e 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -85,7 +85,7 @@ def compare_series_ts(self, result, expected, typ, version):
tm.assert_series_equal(result, expected)
tm.assert_equal(result.index.freq, expected.index.freq)
tm.assert_equal(result.index.freq.normalize, False)
- tm.assert_numpy_array_equal(result > 0, expected > 0)
+ tm.assert_series_equal(result > 0, expected > 0)
# GH 9291
freq = result.index.freq
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 4c72a47dbdf6e..96b66265ea586 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -5280,7 +5280,7 @@ def test_fixed_offset_tz(self):
with ensure_clean_store(self.path) as store:
store['frame'] = frame
recons = store['frame']
- self.assertTrue(recons.index.equals(rng))
+ self.assert_index_equal(recons.index, rng)
self.assertEqual(rng.tz, recons.index.tz)
def test_store_timezone(self):
diff --git a/pandas/sparse/tests/test_frame.py b/pandas/sparse/tests/test_frame.py
index fde4ad15e1185..43d35a4e7f72e 100644
--- a/pandas/sparse/tests/test_frame.py
+++ b/pandas/sparse/tests/test_frame.py
@@ -97,8 +97,11 @@ def test_constructor(self):
# constructed zframe from matrix above
self.assertEqual(self.zframe['A'].fill_value, 0)
- tm.assert_almost_equal([0, 0, 0, 0, 1, 2, 3, 4, 5, 6],
- self.zframe['A'].values)
+ tm.assert_numpy_array_equal(pd.SparseArray([1., 2., 3., 4., 5., 6.]),
+ self.zframe['A'].values)
+ tm.assert_numpy_array_equal(np.array([0., 0., 0., 0., 1., 2.,
+ 3., 4., 5., 6.]),
+ self.zframe['A'].to_dense().values)
# construct no data
sdf = SparseDataFrame(columns=np.arange(10), index=np.arange(10))
@@ -380,8 +383,8 @@ def test_set_value(self):
res2 = res.set_value('foobar', 'qux', 1.5)
self.assertIsNot(res2, res)
- self.assert_numpy_array_equal(res2.columns,
- list(self.frame.columns) + ['qux'])
+ self.assert_index_equal(res2.columns,
+ pd.Index(list(self.frame.columns) + ['qux']))
self.assertEqual(res2.get_value('foobar', 'qux'), 1.5)
def test_fancy_index_misc(self):
@@ -407,7 +410,7 @@ def test_getitem_overload(self):
subindex = self.frame.index[indexer]
subframe = self.frame[indexer]
- self.assert_numpy_array_equal(subindex, subframe.index)
+ self.assert_index_equal(subindex, subframe.index)
self.assertRaises(Exception, self.frame.__getitem__, indexer[:-1])
def test_setitem(self):
diff --git a/pandas/sparse/tests/test_libsparse.py b/pandas/sparse/tests/test_libsparse.py
index 6edae66d4e55b..11bf980a99fec 100644
--- a/pandas/sparse/tests/test_libsparse.py
+++ b/pandas/sparse/tests/test_libsparse.py
@@ -50,8 +50,10 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
yindex = BlockIndex(TEST_LENGTH, yloc, ylen)
bresult = xindex.make_union(yindex)
assert (isinstance(bresult, BlockIndex))
- tm.assert_numpy_array_equal(bresult.blocs, eloc)
- tm.assert_numpy_array_equal(bresult.blengths, elen)
+ tm.assert_numpy_array_equal(bresult.blocs,
+ np.array(eloc, dtype=np.int32))
+ tm.assert_numpy_array_equal(bresult.blengths,
+ np.array(elen, dtype=np.int32))
ixindex = xindex.to_int_index()
iyindex = yindex.to_int_index()
@@ -411,7 +413,8 @@ def test_to_int_index(self):
block = BlockIndex(20, locs, lengths)
dense = block.to_int_index()
- tm.assert_numpy_array_equal(dense.indices, exp_inds)
+ tm.assert_numpy_array_equal(dense.indices,
+ np.array(exp_inds, dtype=np.int32))
def test_to_block_index(self):
index = BlockIndex(10, [0, 5], [4, 5])
diff --git a/pandas/sparse/tests/test_panel.py b/pandas/sparse/tests/test_panel.py
index 89a90f5be40e6..e988ddebd92f0 100644
--- a/pandas/sparse/tests/test_panel.py
+++ b/pandas/sparse/tests/test_panel.py
@@ -121,7 +121,8 @@ def _compare_with_dense(panel):
dlp = panel.to_dense().to_frame()
self.assert_numpy_array_equal(slp.values, dlp.values)
- self.assertTrue(slp.index.equals(dlp.index))
+ self.assert_index_equal(slp.index, dlp.index,
+ check_names=False)
_compare_with_dense(self.panel)
_compare_with_dense(self.panel.reindex(items=['ItemA']))
diff --git a/pandas/sparse/tests/test_series.py b/pandas/sparse/tests/test_series.py
index 58e3dfbdf66e4..27112319ea915 100644
--- a/pandas/sparse/tests/test_series.py
+++ b/pandas/sparse/tests/test_series.py
@@ -294,7 +294,7 @@ def test_constructor_ndarray(self):
def test_constructor_nonnan(self):
arr = [0, 0, 0, nan, nan]
sp_series = SparseSeries(arr, fill_value=0)
- tm.assert_numpy_array_equal(sp_series.values.values, arr)
+ tm.assert_numpy_array_equal(sp_series.values.values, np.array(arr))
self.assertEqual(len(sp_series), 5)
self.assertEqual(sp_series.shape, (5, ))
@@ -726,9 +726,9 @@ def test_dropna(self):
expected = sp.to_dense().valid()
expected = expected[expected != 0]
-
- tm.assert_almost_equal(sp_valid.values, expected.values)
- self.assertTrue(sp_valid.index.equals(expected.index))
+ exp_arr = pd.SparseArray(expected.values, fill_value=0, kind='block')
+ tm.assert_sp_array_equal(sp_valid.values, exp_arr)
+ self.assert_index_equal(sp_valid.index, expected.index)
self.assertEqual(len(sp_valid.sp_values), 2)
result = self.bseries.dropna()
@@ -1042,8 +1042,7 @@ def _run_test(self, ss, kwargs, check):
results = (results[0].T, results[2], results[1])
self._check_results_to_coo(results, check)
- @staticmethod
- def _check_results_to_coo(results, check):
+ def _check_results_to_coo(self, results, check):
(A, il, jl) = results
(A_result, il_result, jl_result) = check
# convert to dense and compare
@@ -1051,8 +1050,8 @@ def _check_results_to_coo(results, check):
# or compare directly as difference of sparse
# assert(abs(A - A_result).max() < 1e-12) # max is failing in python
# 2.6
- tm.assert_numpy_array_equal(il, il_result)
- tm.assert_numpy_array_equal(jl, jl_result)
+ self.assertEqual(il, il_result)
+ self.assertEqual(jl, jl_result)
def test_concat(self):
val1 = np.array([1, 2, np.nan, np.nan, 0, np.nan])
diff --git a/pandas/stats/tests/test_fama_macbeth.py b/pandas/stats/tests/test_fama_macbeth.py
index 2c69eb64fd61d..706becfa730c4 100644
--- a/pandas/stats/tests/test_fama_macbeth.py
+++ b/pandas/stats/tests/test_fama_macbeth.py
@@ -50,7 +50,9 @@ def checkFamaMacBethExtended(self, window_type, x, y, **kwds):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
reference = fama_macbeth(y=y2, x=x2, **kwds)
- assert_almost_equal(reference._stats, result._stats[:, i])
+ # reference._stats is tuple
+ assert_almost_equal(reference._stats, result._stats[:, i],
+ check_dtype=False)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
static = fama_macbeth(y=y2, x=x2, **kwds)
diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py
index 4932ac8ffdf99..bac824f0b4840 100644
--- a/pandas/stats/tests/test_ols.py
+++ b/pandas/stats/tests/test_ols.py
@@ -378,7 +378,7 @@ def test_predict_longer_exog(self):
model = ols(y=endog, x=exog)
pred = model.y_predict
- self.assertTrue(pred.index.equals(exog.index))
+ self.assert_index_equal(pred.index, exog.index)
def test_longpanel_series_combo(self):
wp = tm.makePanel()
@@ -527,13 +527,12 @@ def testFiltering(self):
index = x.index.get_level_values(0)
index = Index(sorted(set(index)))
exp_index = Index([datetime(2000, 1, 1), datetime(2000, 1, 3)])
- self.assertTrue
- (exp_index.equals(index))
+ self.assert_index_equal(exp_index, index)
index = x.index.get_level_values(1)
index = Index(sorted(set(index)))
exp_index = Index(['A', 'B'])
- self.assertTrue(exp_index.equals(index))
+ self.assert_index_equal(exp_index, index)
x = result._x_filtered
index = x.index.get_level_values(0)
@@ -541,24 +540,22 @@ def testFiltering(self):
exp_index = Index([datetime(2000, 1, 1),
datetime(2000, 1, 3),
datetime(2000, 1, 4)])
- self.assertTrue(exp_index.equals(index))
+ self.assert_index_equal(exp_index, index)
- assert_almost_equal(result._y.values.flat, [1, 4, 5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 4, 5],
+ check_dtype=False)
- exp_x = [[6, 14, 1],
- [9, 17, 1],
- [30, 48, 1]]
+ exp_x = np.array([[6, 14, 1], [9, 17, 1],
+ [30, 48, 1]], dtype=np.float64)
assert_almost_equal(exp_x, result._x.values)
- exp_x_filtered = [[6, 14, 1],
- [9, 17, 1],
- [30, 48, 1],
- [11, 20, 1],
- [12, 21, 1]]
+ exp_x_filtered = np.array([[6, 14, 1], [9, 17, 1], [30, 48, 1],
+ [11, 20, 1], [12, 21, 1]], dtype=np.float64)
assert_almost_equal(exp_x_filtered, result._x_filtered.values)
- self.assertTrue(result._x_filtered.index.levels[0].equals(
- result.y_fitted.index))
+ self.assert_index_equal(result._x_filtered.index.levels[0],
+ result.y_fitted.index)
def test_wls_panel(self):
y = tm.makeTimeDataFrame()
@@ -597,9 +594,11 @@ def testWithTimeEffects(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = ols(y=self.panel_y2, x=self.panel_x2, time_effects=True)
- assert_almost_equal(result._y_trans.values.flat, [0, -0.5, 0.5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y_trans.values.flat, [0, -0.5, 0.5],
+ check_dtype=False)
- exp_x = [[0, 0], [-10.5, -15.5], [10.5, 15.5]]
+ exp_x = np.array([[0, 0], [-10.5, -15.5], [10.5, 15.5]])
assert_almost_equal(result._x_trans.values, exp_x)
# _check_non_raw_results(result)
@@ -608,7 +607,9 @@ def testWithEntityEffects(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = ols(y=self.panel_y2, x=self.panel_x2, entity_effects=True)
- assert_almost_equal(result._y.values.flat, [1, 4, 5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 4, 5],
+ check_dtype=False)
exp_x = DataFrame([[0., 6., 14., 1.], [0, 9, 17, 1], [1, 30, 48, 1]],
index=result._x.index, columns=['FE_B', 'x1', 'x2',
@@ -622,7 +623,9 @@ def testWithEntityEffectsAndDroppedDummies(self):
result = ols(y=self.panel_y2, x=self.panel_x2, entity_effects=True,
dropped_dummies={'entity': 'B'})
- assert_almost_equal(result._y.values.flat, [1, 4, 5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 4, 5],
+ check_dtype=False)
exp_x = DataFrame([[1., 6., 14., 1.], [1, 9, 17, 1], [0, 30, 48, 1]],
index=result._x.index, columns=['FE_A', 'x1', 'x2',
'intercept'],
@@ -634,7 +637,9 @@ def testWithXEffects(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = ols(y=self.panel_y2, x=self.panel_x2, x_effects=['x1'])
- assert_almost_equal(result._y.values.flat, [1, 4, 5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 4, 5],
+ check_dtype=False)
res = result._x
exp_x = DataFrame([[0., 0., 14., 1.], [0, 1, 17, 1], [1, 0, 48, 1]],
@@ -648,7 +653,9 @@ def testWithXEffectsAndDroppedDummies(self):
dropped_dummies={'x1': 30})
res = result._x
- assert_almost_equal(result._y.values.flat, [1, 4, 5])
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 4, 5],
+ check_dtype=False)
exp_x = DataFrame([[1., 0., 14., 1.], [0, 1, 17, 1], [0, 0, 48, 1]],
columns=['x1_6', 'x1_9', 'x2', 'intercept'],
index=res.index, dtype=float)
@@ -660,13 +667,15 @@ def testWithXEffectsAndConversion(self):
result = ols(y=self.panel_y3, x=self.panel_x3,
x_effects=['x1', 'x2'])
- assert_almost_equal(result._y.values.flat, [1, 2, 3, 4])
- exp_x = [[0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 1],
- [0, 0, 0, 1, 1]]
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 2, 3, 4],
+ check_dtype=False)
+ exp_x = np.array([[0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 1],
+ [0, 0, 0, 1, 1]], dtype=np.float64)
assert_almost_equal(result._x.values, exp_x)
exp_index = Index(['x1_B', 'x1_C', 'x2_baz', 'x2_foo', 'intercept'])
- self.assertTrue(exp_index.equals(result._x.columns))
+ self.assert_index_equal(exp_index, result._x.columns)
# _check_non_raw_results(result)
@@ -674,14 +683,15 @@ def testWithXEffectsAndConversionAndDroppedDummies(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = ols(y=self.panel_y3, x=self.panel_x3, x_effects=['x1', 'x2'],
dropped_dummies={'x2': 'foo'})
-
- assert_almost_equal(result._y.values.flat, [1, 2, 3, 4])
- exp_x = [[0, 0, 0, 0, 1], [1, 0, 1, 0, 1], [0, 1, 0, 1, 1],
- [0, 0, 0, 0, 1]]
+ # .flat is flatiter instance
+ assert_almost_equal(result._y.values.flat, [1, 2, 3, 4],
+ check_dtype=False)
+ exp_x = np.array([[0, 0, 0, 0, 1], [1, 0, 1, 0, 1], [0, 1, 0, 1, 1],
+ [0, 0, 0, 0, 1]], dtype=np.float64)
assert_almost_equal(result._x.values, exp_x)
exp_index = Index(['x1_B', 'x1_C', 'x2_bar', 'x2_baz', 'intercept'])
- self.assertTrue(exp_index.equals(result._x.columns))
+ self.assert_index_equal(exp_index, result._x.columns)
# _check_non_raw_results(result)
@@ -914,16 +924,21 @@ def setUp(self):
def testFilterWithSeriesRHS(self):
(lhs, rhs, weights, rhs_pre,
index, valid) = _filter_data(self.TS1, {'x1': self.TS2}, None)
- self.tsAssertEqual(self.TS1, lhs)
- self.tsAssertEqual(self.TS2[:3], rhs['x1'])
- self.tsAssertEqual(self.TS2, rhs_pre['x1'])
+ self.tsAssertEqual(self.TS1.astype(np.float64), lhs, check_names=False)
+ self.tsAssertEqual(self.TS2[:3].astype(np.float64), rhs['x1'],
+ check_names=False)
+ self.tsAssertEqual(self.TS2.astype(np.float64), rhs_pre['x1'],
+ check_names=False)
def testFilterWithSeriesRHS2(self):
(lhs, rhs, weights, rhs_pre,
index, valid) = _filter_data(self.TS2, {'x1': self.TS1}, None)
- self.tsAssertEqual(self.TS2[:3], lhs)
- self.tsAssertEqual(self.TS1, rhs['x1'])
- self.tsAssertEqual(self.TS1, rhs_pre['x1'])
+ self.tsAssertEqual(self.TS2[:3].astype(np.float64), lhs,
+ check_names=False)
+ self.tsAssertEqual(self.TS1.astype(np.float64), rhs['x1'],
+ check_names=False)
+ self.tsAssertEqual(self.TS1.astype(np.float64), rhs_pre['x1'],
+ check_names=False)
def testFilterWithSeriesRHS3(self):
(lhs, rhs, weights, rhs_pre,
@@ -931,32 +946,32 @@ def testFilterWithSeriesRHS3(self):
exp_lhs = self.TS3[2:3]
exp_rhs = self.TS4[2:3]
exp_rhs_pre = self.TS4[1:]
- self.tsAssertEqual(exp_lhs, lhs)
- self.tsAssertEqual(exp_rhs, rhs['x1'])
- self.tsAssertEqual(exp_rhs_pre, rhs_pre['x1'])
+ self.tsAssertEqual(exp_lhs, lhs, check_names=False)
+ self.tsAssertEqual(exp_rhs, rhs['x1'], check_names=False)
+ self.tsAssertEqual(exp_rhs_pre, rhs_pre['x1'], check_names=False)
def testFilterWithDataFrameRHS(self):
(lhs, rhs, weights, rhs_pre,
index, valid) = _filter_data(self.TS1, self.DF1, None)
- exp_lhs = self.TS1[1:]
+ exp_lhs = self.TS1[1:].astype(np.float64)
exp_rhs1 = self.TS2[1:3]
- exp_rhs2 = self.TS4[1:3]
- self.tsAssertEqual(exp_lhs, lhs)
- self.tsAssertEqual(exp_rhs1, rhs['x1'])
- self.tsAssertEqual(exp_rhs2, rhs['x2'])
+ exp_rhs2 = self.TS4[1:3].astype(np.float64)
+ self.tsAssertEqual(exp_lhs, lhs, check_names=False)
+ self.tsAssertEqual(exp_rhs1, rhs['x1'], check_names=False)
+ self.tsAssertEqual(exp_rhs2, rhs['x2'], check_names=False)
def testFilterWithDictRHS(self):
(lhs, rhs, weights, rhs_pre,
index, valid) = _filter_data(self.TS1, self.DICT1, None)
- exp_lhs = self.TS1[1:]
- exp_rhs1 = self.TS2[1:3]
- exp_rhs2 = self.TS4[1:3]
- self.tsAssertEqual(exp_lhs, lhs)
- self.tsAssertEqual(exp_rhs1, rhs['x1'])
- self.tsAssertEqual(exp_rhs2, rhs['x2'])
-
- def tsAssertEqual(self, ts1, ts2):
- self.assert_numpy_array_equal(ts1, ts2)
+ exp_lhs = self.TS1[1:].astype(np.float64)
+ exp_rhs1 = self.TS2[1:3].astype(np.float64)
+ exp_rhs2 = self.TS4[1:3].astype(np.float64)
+ self.tsAssertEqual(exp_lhs, lhs, check_names=False)
+ self.tsAssertEqual(exp_rhs1, rhs['x1'], check_names=False)
+ self.tsAssertEqual(exp_rhs2, rhs['x2'], check_names=False)
+
+ def tsAssertEqual(self, ts1, ts2, **kwargs):
+ self.assert_series_equal(ts1, ts2, **kwargs)
if __name__ == '__main__':
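A note on the repeated `check_dtype=False` additions above: `ndarray.flat` returns a `numpy.flatiter`, not an array, and the expected list literals such as `[1, 4, 5]` infer `int64` while the underlying values are floats. A minimal sketch of why the dtype check has to be relaxed (numpy only, independent of the pandas test suite):

```python
import numpy as np

values = np.array([[1.0, 4.0], [5.0, np.nan]])

# .flat is a flatiter instance, not an ndarray
assert isinstance(values.flat, np.flatiter)

# Materializing the iterator recovers a float64 array, whereas the
# expected list literal infers int64 -- a strict dtype comparison
# would fail even though the numeric values match, hence
# check_dtype=False in the assertions in the patch.
assert np.asarray(values.flat).dtype == np.float64
assert np.asarray([1, 4, 5]).dtype != np.float64
```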
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 1da5487aefc01..3b50dd2c1d49f 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -330,28 +330,30 @@ def test_rename(self):
# gets sorted alphabetical
df = DataFrame(data)
renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'})
- self.assert_numpy_array_equal(renamed.index, ['foo', 'bar'])
+ tm.assert_index_equal(renamed.index, pd.Index(['foo', 'bar']))
renamed = df.rename(index=str.upper)
- self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO'])
+ tm.assert_index_equal(renamed.index, pd.Index(['BAR', 'FOO']))
# have to pass something
self.assertRaises(TypeError, self.frame.rename)
# partial columns
renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'})
- self.assert_numpy_array_equal(
- renamed.columns, ['A', 'B', 'foo', 'bar'])
+ tm.assert_index_equal(renamed.columns,
+ pd.Index(['A', 'B', 'foo', 'bar']))
# other axis
renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'})
- self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar'])
+ tm.assert_index_equal(renamed.index,
+ pd.Index(['A', 'B', 'foo', 'bar']))
# index with name
index = Index(['foo', 'bar'], name='name')
renamer = DataFrame(data, index=index)
renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'})
- self.assert_numpy_array_equal(renamed.index, ['bar', 'foo'])
+ tm.assert_index_equal(renamed.index,
+ pd.Index(['bar', 'foo'], name='name'))
self.assertEqual(renamed.index.name, renamer.index.name)
# MultiIndex
@@ -363,12 +365,14 @@ def test_rename(self):
renamer = DataFrame([(0, 0), (1, 1)], index=index, columns=columns)
renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'},
columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'})
- new_index = MultiIndex.from_tuples(
- [('foo3', 'bar1'), ('foo2', 'bar3')])
- new_columns = MultiIndex.from_tuples(
- [('fizz3', 'buzz1'), ('fizz2', 'buzz3')])
- self.assert_numpy_array_equal(renamed.index, new_index)
- self.assert_numpy_array_equal(renamed.columns, new_columns)
+ new_index = MultiIndex.from_tuples([('foo3', 'bar1'),
+ ('foo2', 'bar3')],
+ names=['foo', 'bar'])
+ new_columns = MultiIndex.from_tuples([('fizz3', 'buzz1'),
+ ('fizz2', 'buzz3')],
+ names=['fizz', 'buzz'])
+ self.assert_index_equal(renamed.index, new_index)
+ self.assert_index_equal(renamed.columns, new_columns)
self.assertEqual(renamed.index.names, renamer.index.names)
self.assertEqual(renamed.columns.names, renamer.columns.names)
@@ -460,28 +464,30 @@ def test_reset_index(self):
stacked.index.names = [None, None]
deleveled2 = stacked.reset_index()
- self.assert_numpy_array_equal(deleveled['first'],
- deleveled2['level_0'])
- self.assert_numpy_array_equal(deleveled['second'],
- deleveled2['level_1'])
+ tm.assert_series_equal(deleveled['first'], deleveled2['level_0'],
+ check_names=False)
+ tm.assert_series_equal(deleveled['second'], deleveled2['level_1'],
+ check_names=False)
# default name assigned
rdf = self.frame.reset_index()
- self.assert_numpy_array_equal(rdf['index'], self.frame.index.values)
+ exp = pd.Series(self.frame.index.values, name='index')
+ self.assert_series_equal(rdf['index'], exp)
# default name assigned, corner case
df = self.frame.copy()
df['index'] = 'foo'
rdf = df.reset_index()
- self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values)
+ exp = pd.Series(self.frame.index.values, name='level_0')
+ self.assert_series_equal(rdf['level_0'], exp)
# but this is ok
self.frame.index.name = 'index'
deleveled = self.frame.reset_index()
- self.assert_numpy_array_equal(deleveled['index'],
- self.frame.index.values)
- self.assert_numpy_array_equal(deleveled.index,
- np.arange(len(deleveled)))
+ self.assert_series_equal(deleveled['index'],
+ pd.Series(self.frame.index))
+ self.assert_index_equal(deleveled.index,
+ pd.Index(np.arange(len(deleveled))))
# preserve column names
self.frame.columns.name = 'columns'
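The `assert_index_equal` migrations in the hunks above are not purely cosmetic: the old `assertTrue(exp_index.equals(...))` pattern relies on `Index.equals`, which compares values only and silently ignores index names. A small sketch of the difference, assuming a current pandas install where `pd.testing.assert_index_equal` is the public home of the testing helper:

```python
import pandas as pd

named = pd.Index(['foo', 'bar'], name='name')
unnamed = pd.Index(['foo', 'bar'])

# Index.equals compares values only, so a lost index name goes
# unnoticed by the old assertTrue(...equals(...)) pattern...
assert named.equals(unnamed)

# ...while assert_index_equal checks names by default (check_names=True)
# and raises on the mismatch.
try:
    pd.testing.assert_index_equal(named, unnamed)
    raise SystemExit("expected an AssertionError for the name mismatch")
except AssertionError:
    pass
```

The same reasoning applies to the `assert_series_equal(..., check_names=False)` calls in the patch: names are now compared by default, so tests that intentionally compare differently-named objects must opt out explicitly.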
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 20aaae586f14f..b71235a8f6576 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -18,12 +18,6 @@
import pandas.core.nanops as nanops
import pandas.formats.printing as printing
-from pandas.util.testing import (assert_almost_equal,
- assert_equal,
- assert_series_equal,
- assert_frame_equal,
- assertRaisesRegexp)
-
import pandas.util.testing as tm
from pandas.tests.frame.common import TestData
@@ -60,12 +54,12 @@ def _check_method(self, method='pearson', check_minp=False):
if not check_minp:
correls = self.frame.corr(method=method)
exp = self.frame['A'].corr(self.frame['C'], method=method)
- assert_almost_equal(correls['A']['C'], exp)
+ tm.assert_almost_equal(correls['A']['C'], exp)
else:
result = self.frame.corr(min_periods=len(self.frame) - 8)
expected = self.frame.corr()
expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_corr_non_numeric(self):
tm._skip_if_no_scipy()
@@ -75,7 +69,7 @@ def test_corr_non_numeric(self):
# exclude non-numeric types
result = self.mixed_frame.corr()
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr()
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_corr_nooverlap(self):
tm._skip_if_no_scipy()
@@ -123,14 +117,14 @@ def test_corr_int_and_boolean(self):
expected = DataFrame(np.ones((2, 2)), index=[
'a', 'b'], columns=['a', 'b'])
for meth in ['pearson', 'kendall', 'spearman']:
- assert_frame_equal(df.corr(meth), expected)
+ tm.assert_frame_equal(df.corr(meth), expected)
def test_cov(self):
# min_periods no NAs (corner case)
expected = self.frame.cov()
result = self.frame.cov(min_periods=len(self.frame))
- assert_frame_equal(expected, result)
+ tm.assert_frame_equal(expected, result)
result = self.frame.cov(min_periods=len(self.frame) + 1)
self.assertTrue(isnull(result.values).all())
@@ -149,25 +143,25 @@ def test_cov(self):
self.frame['B'][:10] = nan
cov = self.frame.cov()
- assert_almost_equal(cov['A']['C'],
- self.frame['A'].cov(self.frame['C']))
+ tm.assert_almost_equal(cov['A']['C'],
+ self.frame['A'].cov(self.frame['C']))
# exclude non-numeric types
result = self.mixed_frame.cov()
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# Single column frame
df = DataFrame(np.linspace(0.0, 1.0, 10))
result = df.cov()
expected = DataFrame(np.cov(df.values.T).reshape((1, 1)),
index=df.columns, columns=df.columns)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df.ix[0] = np.nan
result = df.cov()
expected = DataFrame(np.cov(df.values[1:].T).reshape((1, 1)),
index=df.columns, columns=df.columns)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_corrwith(self):
a = self.tsframe
@@ -180,13 +174,13 @@ def test_corrwith(self):
del b['B']
colcorr = a.corrwith(b, axis=0)
- assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
+ tm.assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
rowcorr = a.corrwith(b, axis=1)
- assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
+ tm.assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
dropped = a.corrwith(b, axis=0, drop=True)
- assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
+ tm.assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
self.assertNotIn('B', dropped)
dropped = a.corrwith(b, axis=1, drop=True)
@@ -199,7 +193,7 @@ def test_corrwith(self):
df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns)
correls = df1.corrwith(df2, axis=1)
for row in index[:4]:
- assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
+ tm.assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
def test_corrwith_with_objects(self):
df1 = tm.makeTimeDataFrame()
@@ -211,17 +205,17 @@ def test_corrwith_with_objects(self):
result = df1.corrwith(df2)
expected = df1.ix[:, cols].corrwith(df2.ix[:, cols])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = df1.corrwith(df2, axis=1)
expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_corrwith_series(self):
result = self.tsframe.corrwith(self.tsframe['A'])
expected = self.tsframe.apply(self.tsframe['A'].corr)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_corrwith_matches_corrcoef(self):
df1 = DataFrame(np.arange(10000), columns=['a'])
@@ -229,7 +223,7 @@ def test_corrwith_matches_corrcoef(self):
c1 = df1.corrwith(df2)['a']
c2 = np.corrcoef(df1['a'], df2['a'])[0][1]
- assert_almost_equal(c1, c2)
+ tm.assert_almost_equal(c1, c2)
self.assertTrue(c1 < 1)
def test_bool_describe_in_mixed_frame(self):
@@ -246,14 +240,14 @@ def test_bool_describe_in_mixed_frame(self):
10, 20, 30, 40, 50]},
index=['count', 'mean', 'std', 'min', '25%',
'50%', '75%', 'max'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# Top value is a boolean value that is False
result = df.describe(include=['bool'])
expected = DataFrame({'bool_data': [5, 2, False, 3]},
index=['count', 'unique', 'top', 'freq'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_describe_categorical_columns(self):
# GH 11558
@@ -310,8 +304,9 @@ def test_reduce_mixed_frame(self):
})
df.reindex(columns=['bool_data', 'int_data', 'string_data'])
test = df.sum(axis=0)
- assert_almost_equal(test.values, [2, 150, 'abcde'])
- assert_series_equal(test, df.T.sum(axis=1))
+ tm.assert_numpy_array_equal(test.values,
+ np.array([2, 150, 'abcde'], dtype=object))
+ tm.assert_series_equal(test, df.T.sum(axis=1))
def test_count(self):
f = lambda s: notnull(s).sum()
@@ -333,17 +328,17 @@ def test_count(self):
df = DataFrame(index=lrange(10))
result = df.count(1)
expected = Series(0, index=df.index)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame(columns=lrange(10))
result = df.count(0)
expected = Series(0, index=df.columns)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame()
result = df.count()
expected = Series(0, index=[])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_sum(self):
self._check_stat_op('sum', np.sum, has_numeric_only=True)
@@ -377,7 +372,7 @@ def test_stat_operators_attempt_obj_array(self):
expected = getattr(df.astype('f8'), meth)(1)
if not tm._incompat_bottleneck_version(meth):
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_mean(self):
self._check_stat_op('mean', np.mean, check_dates=True)
@@ -405,12 +400,12 @@ def test_cummin(self):
# axis = 0
cummin = self.tsframe.cummin()
expected = self.tsframe.apply(Series.cummin)
- assert_frame_equal(cummin, expected)
+ tm.assert_frame_equal(cummin, expected)
# axis = 1
cummin = self.tsframe.cummin(axis=1)
expected = self.tsframe.apply(Series.cummin, axis=1)
- assert_frame_equal(cummin, expected)
+ tm.assert_frame_equal(cummin, expected)
# it works
df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -428,12 +423,12 @@ def test_cummax(self):
# axis = 0
cummax = self.tsframe.cummax()
expected = self.tsframe.apply(Series.cummax)
- assert_frame_equal(cummax, expected)
+ tm.assert_frame_equal(cummax, expected)
# axis = 1
cummax = self.tsframe.cummax(axis=1)
expected = self.tsframe.apply(Series.cummax, axis=1)
- assert_frame_equal(cummax, expected)
+ tm.assert_frame_equal(cummax, expected)
# it works
df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -460,11 +455,11 @@ def test_var_std(self):
result = self.tsframe.std(ddof=4)
expected = self.tsframe.apply(lambda x: x.std(ddof=4))
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
result = self.tsframe.var(ddof=4)
expected = self.tsframe.apply(lambda x: x.var(ddof=4))
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
result = nanops.nanvar(arr, axis=0)
@@ -489,11 +484,11 @@ def test_numeric_only_flag(self):
for meth in methods:
result = getattr(df1, meth)(axis=1, numeric_only=True)
expected = getattr(df1[['bar', 'baz']], meth)(axis=1)
- assert_series_equal(expected, result)
+ tm.assert_series_equal(expected, result)
result = getattr(df2, meth)(axis=1, numeric_only=True)
expected = getattr(df2[['bar', 'baz']], meth)(axis=1)
- assert_series_equal(expected, result)
+ tm.assert_series_equal(expected, result)
# df1 has all numbers, df2 has a letter inside
self.assertRaises(TypeError, lambda: getattr(df1, meth)
@@ -509,12 +504,12 @@ def test_cumsum(self):
# axis = 0
cumsum = self.tsframe.cumsum()
expected = self.tsframe.apply(Series.cumsum)
- assert_frame_equal(cumsum, expected)
+ tm.assert_frame_equal(cumsum, expected)
# axis = 1
cumsum = self.tsframe.cumsum(axis=1)
expected = self.tsframe.apply(Series.cumsum, axis=1)
- assert_frame_equal(cumsum, expected)
+ tm.assert_frame_equal(cumsum, expected)
# works
df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -532,12 +527,12 @@ def test_cumprod(self):
# axis = 0
cumprod = self.tsframe.cumprod()
expected = self.tsframe.apply(Series.cumprod)
- assert_frame_equal(cumprod, expected)
+ tm.assert_frame_equal(cumprod, expected)
# axis = 1
cumprod = self.tsframe.cumprod(axis=1)
expected = self.tsframe.apply(Series.cumprod, axis=1)
- assert_frame_equal(cumprod, expected)
+ tm.assert_frame_equal(cumprod, expected)
# fix issue
cumprod_xs = self.tsframe.cumprod(axis=1)
@@ -574,48 +569,48 @@ def test_rank(self):
exp1 = np.apply_along_axis(rankdata, 1, fvals)
exp1[mask] = np.nan
- assert_almost_equal(ranks0.values, exp0)
- assert_almost_equal(ranks1.values, exp1)
+ tm.assert_almost_equal(ranks0.values, exp0)
+ tm.assert_almost_equal(ranks1.values, exp1)
# integers
df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4)))
result = df.rank()
exp = df.astype(float).rank()
- assert_frame_equal(result, exp)
+ tm.assert_frame_equal(result, exp)
result = df.rank(1)
exp = df.astype(float).rank(1)
- assert_frame_equal(result, exp)
+ tm.assert_frame_equal(result, exp)
def test_rank2(self):
df = DataFrame([[1, 3, 2], [1, 2, 3]])
expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0
result = df.rank(1, pct=True)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = DataFrame([[1, 3, 2], [1, 2, 3]])
expected = df.rank(0) / 2.0
result = df.rank(0, pct=True)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']])
expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]])
result = df.rank(1, numeric_only=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]])
result = df.rank(0, numeric_only=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']])
expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]])
result = df.rank(1, numeric_only=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]])
result = df.rank(0, numeric_only=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# f7u12, this does not work without extensive workaround
data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)],
@@ -627,12 +622,12 @@ def test_rank2(self):
expected = DataFrame([[2., nan, 1.],
[2., 3., 1.]])
result = df.rank(1, numeric_only=False, ascending=True)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
expected = DataFrame([[1., nan, 2.],
[2., 1., 3.]])
result = df.rank(1, numeric_only=False, ascending=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# mixed-type frames
self.mixed_frame['datetime'] = datetime.now()
@@ -640,12 +635,12 @@ def test_rank2(self):
result = self.mixed_frame.rank(1)
expected = self.mixed_frame.rank(1, numeric_only=True)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = DataFrame({"a": [1e-20, -5, 1e-20 + 1e-40, 10,
1e60, 1e80, 1e-30]})
exp = DataFrame({"a": [3.5, 1., 3.5, 5., 6., 7., 2.]})
- assert_frame_equal(df.rank(), exp)
+ tm.assert_frame_equal(df.rank(), exp)
def test_rank_na_option(self):
tm._skip_if_no_scipy()
@@ -665,8 +660,8 @@ def test_rank_na_option(self):
exp0 = np.apply_along_axis(rankdata, 0, fvals)
exp1 = np.apply_along_axis(rankdata, 1, fvals)
- assert_almost_equal(ranks0.values, exp0)
- assert_almost_equal(ranks1.values, exp1)
+ tm.assert_almost_equal(ranks0.values, exp0)
+ tm.assert_almost_equal(ranks1.values, exp1)
# top
ranks0 = self.frame.rank(na_option='top')
@@ -680,8 +675,8 @@ def test_rank_na_option(self):
exp0 = np.apply_along_axis(rankdata, 0, fval0)
exp1 = np.apply_along_axis(rankdata, 1, fval1)
- assert_almost_equal(ranks0.values, exp0)
- assert_almost_equal(ranks1.values, exp1)
+ tm.assert_almost_equal(ranks0.values, exp0)
+ tm.assert_almost_equal(ranks1.values, exp1)
# descending
@@ -694,8 +689,8 @@ def test_rank_na_option(self):
exp0 = np.apply_along_axis(rankdata, 0, -fvals)
exp1 = np.apply_along_axis(rankdata, 1, -fvals)
- assert_almost_equal(ranks0.values, exp0)
- assert_almost_equal(ranks1.values, exp1)
+ tm.assert_almost_equal(ranks0.values, exp0)
+ tm.assert_almost_equal(ranks1.values, exp1)
# descending
@@ -711,14 +706,14 @@ def test_rank_na_option(self):
exp0 = np.apply_along_axis(rankdata, 0, -fval0)
exp1 = np.apply_along_axis(rankdata, 1, -fval1)
- assert_almost_equal(ranks0.values, exp0)
- assert_almost_equal(ranks1.values, exp1)
+ tm.assert_numpy_array_equal(ranks0.values, exp0)
+ tm.assert_numpy_array_equal(ranks1.values, exp1)
def test_rank_axis(self):
# check if using axes' names gives the same result
df = pd.DataFrame([[2, 1], [4, 3]])
- assert_frame_equal(df.rank(axis=0), df.rank(axis='index'))
- assert_frame_equal(df.rank(axis=1), df.rank(axis='columns'))
+ tm.assert_frame_equal(df.rank(axis=0), df.rank(axis='index'))
+ tm.assert_frame_equal(df.rank(axis=1), df.rank(axis='columns'))
def test_sem(self):
alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
@@ -727,7 +722,7 @@ def test_sem(self):
result = self.tsframe.sem(ddof=4)
expected = self.tsframe.apply(
lambda x: x.std(ddof=4) / np.sqrt(len(x)))
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
result = nanops.nansem(arr, axis=0)
@@ -789,7 +784,7 @@ def alt(x):
kurt = df.kurt()
kurt2 = df.kurt(level=0).xs('bar')
- assert_series_equal(kurt, kurt2, check_names=False)
+ tm.assert_series_equal(kurt, kurt2, check_names=False)
self.assertTrue(kurt.name is None)
self.assertEqual(kurt2.name, 'bar')
@@ -827,26 +822,26 @@ def wrapper(x):
result0 = f(axis=0, skipna=False)
result1 = f(axis=1, skipna=False)
- assert_series_equal(result0, frame.apply(wrapper),
- check_dtype=check_dtype,
- check_less_precise=check_less_precise)
+ tm.assert_series_equal(result0, frame.apply(wrapper),
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise)
# HACK: win32
- assert_series_equal(result1, frame.apply(wrapper, axis=1),
- check_dtype=False,
- check_less_precise=check_less_precise)
+ tm.assert_series_equal(result1, frame.apply(wrapper, axis=1),
+ check_dtype=False,
+ check_less_precise=check_less_precise)
else:
skipna_wrapper = alternative
wrapper = alternative
result0 = f(axis=0)
result1 = f(axis=1)
- assert_series_equal(result0, frame.apply(skipna_wrapper),
- check_dtype=check_dtype,
- check_less_precise=check_less_precise)
+ tm.assert_series_equal(result0, frame.apply(skipna_wrapper),
+ check_dtype=check_dtype,
+ check_less_precise=check_less_precise)
if not tm._incompat_bottleneck_version(name):
- assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
- check_dtype=False,
- check_less_precise=check_less_precise)
+ exp = frame.apply(skipna_wrapper, axis=1)
+ tm.assert_series_equal(result1, exp, check_dtype=False,
+ check_less_precise=check_less_precise)
# check dtypes
if check_dtype:
@@ -859,7 +854,7 @@ def wrapper(x):
# assert_series_equal(result, comp)
# bad axis
- assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
+ tm.assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
# make sure works on mixed-type frame
getattr(self.mixed_frame, name)(axis=0)
getattr(self.mixed_frame, name)(axis=1)
@@ -885,20 +880,20 @@ def test_mode(self):
"C": [8, 8, 8, 9, 9, 9],
"D": np.arange(6, dtype='int64'),
"E": [8, 8, 1, 1, 3, 3]})
- assert_frame_equal(df[["A"]].mode(),
- pd.DataFrame({"A": [12]}))
+ tm.assert_frame_equal(df[["A"]].mode(),
+ pd.DataFrame({"A": [12]}))
expected = pd.Series([], dtype='int64', name='D').to_frame()
- assert_frame_equal(df[["D"]].mode(), expected)
+ tm.assert_frame_equal(df[["D"]].mode(), expected)
expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame()
- assert_frame_equal(df[["E"]].mode(), expected)
- assert_frame_equal(df[["A", "B"]].mode(),
- pd.DataFrame({"A": [12], "B": [10.]}))
- assert_frame_equal(df.mode(),
- pd.DataFrame({"A": [12, np.nan, np.nan],
- "B": [10, np.nan, np.nan],
- "C": [8, 9, np.nan],
- "D": [np.nan, np.nan, np.nan],
- "E": [1, 3, 8]}))
+ tm.assert_frame_equal(df[["E"]].mode(), expected)
+ tm.assert_frame_equal(df[["A", "B"]].mode(),
+ pd.DataFrame({"A": [12], "B": [10.]}))
+ tm.assert_frame_equal(df.mode(),
+ pd.DataFrame({"A": [12, np.nan, np.nan],
+ "B": [10, np.nan, np.nan],
+ "C": [8, 9, np.nan],
+ "D": [np.nan, np.nan, np.nan],
+ "E": [1, 3, 8]}))
# outputs in sorted order
df["C"] = list(reversed(df["C"]))
@@ -910,7 +905,7 @@ def test_mode(self):
"C": [8, 9]}))
printing.pprint_thing(a)
printing.pprint_thing(b)
- assert_frame_equal(a, b)
+ tm.assert_frame_equal(a, b)
# should work with heterogeneous types
df = pd.DataFrame({"A": np.arange(6, dtype='int64'),
"B": pd.date_range('2011', periods=6),
@@ -918,7 +913,7 @@ def test_mode(self):
exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype),
"B": pd.Series([], dtype=df["B"].dtype),
"C": pd.Series([], dtype=df["C"].dtype)})
- assert_frame_equal(df.mode(), exp)
+ tm.assert_frame_equal(df.mode(), exp)
# and also when not empty
df.loc[1, "A"] = 0
@@ -929,7 +924,7 @@ def test_mode(self):
dtype=df["B"].dtype),
"C": pd.Series(['e'], dtype=df["C"].dtype)})
- assert_frame_equal(df.mode(), exp)
+ tm.assert_frame_equal(df.mode(), exp)
def test_operators_timedelta64(self):
from datetime import timedelta
@@ -962,8 +957,8 @@ def test_operators_timedelta64(self):
result2 = abs(diffs)
expected = DataFrame(dict(A=df['A'] - df['C'],
B=df['B'] - df['A']))
- assert_frame_equal(result, expected)
- assert_frame_equal(result2, expected)
+ tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result2, expected)
# mixed frame
mixed = diffs.copy()
@@ -982,22 +977,22 @@ def test_operators_timedelta64(self):
'foo', 1, 1.0,
Timestamp('20130101')],
index=mixed.columns)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# excludes numeric
result = mixed.min(axis=1)
expected = Series([1, 1, 1.], index=[0, 1, 2])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# works when only those columns are selected
result = mixed[['A', 'B']].min(1)
expected = Series([timedelta(days=-1)] * 3)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = mixed[['A', 'B']].min()
expected = Series([timedelta(seconds=5 * 60 + 5),
timedelta(days=-1)], index=['A', 'B'])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# GH 3106
df = DataFrame({'time': date_range('20130102', periods=5),
@@ -1035,13 +1030,13 @@ def test_mean_corner(self):
# unit test when have object data
the_mean = self.mixed_frame.mean(axis=0)
the_sum = self.mixed_frame.sum(axis=0, numeric_only=True)
- self.assertTrue(the_sum.index.equals(the_mean.index))
+ self.assert_index_equal(the_sum.index, the_mean.index)
self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns))
# xs sum mixed type, just want to know it works...
the_mean = self.mixed_frame.mean(axis=1)
the_sum = self.mixed_frame.sum(axis=1, numeric_only=True)
- self.assertTrue(the_sum.index.equals(the_mean.index))
+ self.assert_index_equal(the_sum.index, the_mean.index)
# take mean of boolean column
self.frame['bool'] = self.frame['A'] > 0
@@ -1070,8 +1065,8 @@ def test_count_objects(self):
dm = DataFrame(self.mixed_frame._series)
df = DataFrame(self.mixed_frame._series)
- assert_series_equal(dm.count(), df.count())
- assert_series_equal(dm.count(1), df.count(1))
+ tm.assert_series_equal(dm.count(), df.count())
+ tm.assert_series_equal(dm.count(1), df.count(1))
def test_cumsum_corner(self):
dm = DataFrame(np.arange(20).reshape(4, 5),
@@ -1094,9 +1089,9 @@ def test_idxmin(self):
for axis in [0, 1]:
for df in [frame, self.intframe]:
result = df.idxmin(axis=axis, skipna=skipna)
- expected = df.apply(
- Series.idxmin, axis=axis, skipna=skipna)
- assert_series_equal(result, expected)
+ expected = df.apply(Series.idxmin, axis=axis,
+ skipna=skipna)
+ tm.assert_series_equal(result, expected)
self.assertRaises(ValueError, frame.idxmin, axis=2)
@@ -1108,9 +1103,9 @@ def test_idxmax(self):
for axis in [0, 1]:
for df in [frame, self.intframe]:
result = df.idxmax(axis=axis, skipna=skipna)
- expected = df.apply(
- Series.idxmax, axis=axis, skipna=skipna)
- assert_series_equal(result, expected)
+ expected = df.apply(Series.idxmax, axis=axis,
+ skipna=skipna)
+ tm.assert_series_equal(result, expected)
self.assertRaises(ValueError, frame.idxmax, axis=2)
@@ -1169,18 +1164,18 @@ def wrapper(x):
result0 = f(axis=0, skipna=False)
result1 = f(axis=1, skipna=False)
- assert_series_equal(result0, frame.apply(wrapper))
- assert_series_equal(result1, frame.apply(wrapper, axis=1),
- check_dtype=False) # HACK: win32
+ tm.assert_series_equal(result0, frame.apply(wrapper))
+ tm.assert_series_equal(result1, frame.apply(wrapper, axis=1),
+ check_dtype=False) # HACK: win32
else:
skipna_wrapper = alternative
wrapper = alternative
result0 = f(axis=0)
result1 = f(axis=1)
- assert_series_equal(result0, frame.apply(skipna_wrapper))
- assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
- check_dtype=False)
+ tm.assert_series_equal(result0, frame.apply(skipna_wrapper))
+ tm.assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
+ check_dtype=False)
# result = f(axis=1)
# comp = frame.apply(alternative, axis=1).reindex(result.index)
@@ -1230,7 +1225,7 @@ def test_nlargest(self):
'b': list(ascii_lowercase[:10])})
result = df.nlargest(5, 'a')
expected = df.sort_values('a', ascending=False).head(5)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_nlargest_multiple_columns(self):
from string import ascii_lowercase
@@ -1239,7 +1234,7 @@ def test_nlargest_multiple_columns(self):
'c': np.random.permutation(10).astype('float64')})
result = df.nlargest(5, ['a', 'b'])
expected = df.sort_values(['a', 'b'], ascending=False).head(5)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_nsmallest(self):
from string import ascii_lowercase
@@ -1247,7 +1242,7 @@ def test_nsmallest(self):
'b': list(ascii_lowercase[:10])})
result = df.nsmallest(5, 'a')
expected = df.sort_values('a').head(5)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_nsmallest_multiple_columns(self):
from string import ascii_lowercase
@@ -1256,7 +1251,7 @@ def test_nsmallest_multiple_columns(self):
'c': np.random.permutation(10).astype('float64')})
result = df.nsmallest(5, ['a', 'c'])
expected = df.sort_values(['a', 'c']).head(5)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# ----------------------------------------------------------------------
# Isin
@@ -1270,13 +1265,13 @@ def test_isin(self):
result = df.isin(other)
expected = DataFrame([df.loc[s].isin(other) for s in df.index])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_empty(self):
df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
result = df.isin([])
expected = pd.DataFrame(False, df.index, df.columns)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_dict(self):
df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
@@ -1286,7 +1281,7 @@ def test_isin_dict(self):
expected.loc[0, 'A'] = True
result = df.isin(d)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# non unique columns
df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
@@ -1294,7 +1289,7 @@ def test_isin_dict(self):
expected = DataFrame(False, df.index, df.columns)
expected.loc[0, 'A'] = True
result = df.isin(d)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_with_string_scalar(self):
# GH4763
@@ -1314,13 +1309,13 @@ def test_isin_df(self):
result = df1.isin(df2)
expected['A'].loc[[1, 3]] = True
expected['B'].loc[[0, 2]] = True
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# partial overlapping columns
df2.columns = ['A', 'C']
result = df1.isin(df2)
expected['B'] = False
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_df_dupe_values(self):
df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]})
@@ -1348,7 +1343,7 @@ def test_isin_dupe_self(self):
expected = DataFrame(False, index=df.index, columns=df.columns)
expected.loc[0] = True
expected.iloc[1, 1] = True
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_against_series(self):
df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]},
@@ -1358,7 +1353,7 @@ def test_isin_against_series(self):
expected['A'].loc['a'] = True
expected.loc['d'] = True
result = df.isin(s)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_isin_multiIndex(self):
idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'),
@@ -1374,7 +1369,7 @@ def test_isin_multiIndex(self):
# against regular index
expected = DataFrame(False, index=df1.index, columns=df1.columns)
result = df1.isin(df2)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df2.index = idx
expected = df2.values.astype(np.bool)
@@ -1382,7 +1377,7 @@ def test_isin_multiIndex(self):
expected = DataFrame(expected, columns=['A', 'B'], index=idx)
result = df1.isin(df2)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# ----------------------------------------------------------------------
# Row deduplication
@@ -1398,43 +1393,43 @@ def test_drop_duplicates(self):
# single column
result = df.drop_duplicates('AAA')
expected = df[:2]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('AAA', keep='last')
expected = df.ix[[6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('AAA', keep=False)
expected = df.ix[[]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
self.assertEqual(len(result), 0)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates('AAA', take_last=True)
expected = df.ix[[6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multi column
expected = df.ix[[0, 1, 2, 3]]
result = df.drop_duplicates(np.array(['AAA', 'B']))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['AAA', 'B'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(('AAA', 'B'), keep='last')
expected = df.ix[[0, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(('AAA', 'B'), keep=False)
expected = df.ix[[0]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates(('AAA', 'B'), take_last=True)
expected = df.ix[[0, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# consider everything
df2 = df.ix[:, ['AAA', 'B', 'C']]
@@ -1442,64 +1437,64 @@ def test_drop_duplicates(self):
result = df2.drop_duplicates()
# in this case only
expected = df2.drop_duplicates(['AAA', 'B'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df2.drop_duplicates(keep='last')
expected = df2.drop_duplicates(['AAA', 'B'], keep='last')
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df2.drop_duplicates(keep=False)
expected = df2.drop_duplicates(['AAA', 'B'], keep=False)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df2.drop_duplicates(take_last=True)
with tm.assert_produces_warning(FutureWarning):
expected = df2.drop_duplicates(['AAA', 'B'], take_last=True)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# integers
result = df.drop_duplicates('C')
expected = df.iloc[[0, 2]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('C', keep='last')
expected = df.iloc[[-2, -1]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df['E'] = df['C'].astype('int8')
result = df.drop_duplicates('E')
expected = df.iloc[[0, 2]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('E', keep='last')
expected = df.iloc[[-2, -1]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# GH 11376
df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0],
'y': [0, 6, 5, 5, 9, 1, 2]})
expected = df.loc[df.index != 3]
- assert_frame_equal(df.drop_duplicates(), expected)
+ tm.assert_frame_equal(df.drop_duplicates(), expected)
df = pd.DataFrame([[1, 0], [0, 2]])
- assert_frame_equal(df.drop_duplicates(), df)
+ tm.assert_frame_equal(df.drop_duplicates(), df)
df = pd.DataFrame([[-2, 0], [0, -4]])
- assert_frame_equal(df.drop_duplicates(), df)
+ tm.assert_frame_equal(df.drop_duplicates(), df)
x = np.iinfo(np.int64).max / 3 * 2
df = pd.DataFrame([[-x, x], [0, x + 4]])
- assert_frame_equal(df.drop_duplicates(), df)
+ tm.assert_frame_equal(df.drop_duplicates(), df)
df = pd.DataFrame([[-x, x], [x, x + 4]])
- assert_frame_equal(df.drop_duplicates(), df)
+ tm.assert_frame_equal(df.drop_duplicates(), df)
# GH 11864
df = pd.DataFrame([i] * 9 for i in range(16))
df = df.append([[1] + [0] * 8], ignore_index=True)
for keep in ['first', 'last', False]:
- assert_equal(df.duplicated(keep=keep).sum(), 0)
+ self.assertEqual(df.duplicated(keep=keep).sum(), 0)
def test_drop_duplicates_for_take_all(self):
df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar',
@@ -1512,28 +1507,28 @@ def test_drop_duplicates_for_take_all(self):
# single column
result = df.drop_duplicates('AAA')
expected = df.iloc[[0, 1, 2, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('AAA', keep='last')
expected = df.iloc[[2, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('AAA', keep=False)
expected = df.iloc[[2, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multiple columns
result = df.drop_duplicates(['AAA', 'B'])
expected = df.iloc[[0, 1, 2, 3, 4, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['AAA', 'B'], keep='last')
expected = df.iloc[[0, 1, 2, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['AAA', 'B'], keep=False)
expected = df.iloc[[0, 1, 2, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_drop_duplicates_tuple(self):
df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar',
@@ -1546,27 +1541,27 @@ def test_drop_duplicates_tuple(self):
# single column
result = df.drop_duplicates(('AA', 'AB'))
expected = df[:2]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(('AA', 'AB'), keep='last')
expected = df.ix[[6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(('AA', 'AB'), keep=False)
expected = df.ix[[]] # empty df
self.assertEqual(len(result), 0)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates(('AA', 'AB'), take_last=True)
expected = df.ix[[6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multi column
expected = df.ix[[0, 1, 2, 3]]
result = df.drop_duplicates((('AA', 'AB'), 'B'))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_drop_duplicates_NA(self):
# none
@@ -1580,41 +1575,41 @@ def test_drop_duplicates_NA(self):
# single column
result = df.drop_duplicates('A')
expected = df.ix[[0, 2, 3]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('A', keep='last')
expected = df.ix[[1, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('A', keep=False)
expected = df.ix[[]] # empty df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
self.assertEqual(len(result), 0)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates('A', take_last=True)
expected = df.ix[[1, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multi column
result = df.drop_duplicates(['A', 'B'])
expected = df.ix[[0, 2, 3, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['A', 'B'], keep='last')
expected = df.ix[[1, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['A', 'B'], keep=False)
expected = df.ix[[6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates(['A', 'B'], take_last=True)
expected = df.ix[[1, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# nan
df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
@@ -1627,41 +1622,41 @@ def test_drop_duplicates_NA(self):
# single column
result = df.drop_duplicates('C')
expected = df[:2]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('C', keep='last')
expected = df.ix[[3, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('C', keep=False)
expected = df.ix[[]] # empty df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
self.assertEqual(len(result), 0)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates('C', take_last=True)
expected = df.ix[[3, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multi column
result = df.drop_duplicates(['C', 'B'])
expected = df.ix[[0, 1, 2, 4]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['C', 'B'], keep='last')
expected = df.ix[[1, 3, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates(['C', 'B'], keep=False)
expected = df.ix[[1]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
with tm.assert_produces_warning(FutureWarning):
result = df.drop_duplicates(['C', 'B'], take_last=True)
expected = df.ix[[1, 3, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_drop_duplicates_NA_for_take_all(self):
# none
@@ -1672,30 +1667,30 @@ def test_drop_duplicates_NA_for_take_all(self):
# single column
result = df.drop_duplicates('A')
expected = df.iloc[[0, 2, 3, 5, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('A', keep='last')
expected = df.iloc[[1, 4, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('A', keep=False)
expected = df.iloc[[5, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# nan
# single column
result = df.drop_duplicates('C')
expected = df.iloc[[0, 1, 5, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('C', keep='last')
expected = df.iloc[[3, 5, 6, 7]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = df.drop_duplicates('C', keep=False)
expected = df.iloc[[5, 6]]
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_drop_duplicates_inplace(self):
orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
@@ -1710,19 +1705,19 @@ def test_drop_duplicates_inplace(self):
df.drop_duplicates('A', inplace=True)
expected = orig[:2]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = orig.copy()
df.drop_duplicates('A', keep='last', inplace=True)
expected = orig.ix[[6, 7]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = orig.copy()
df.drop_duplicates('A', keep=False, inplace=True)
expected = orig.ix[[]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
self.assertEqual(len(df), 0)
# deprecate take_last
@@ -1731,26 +1726,26 @@ def test_drop_duplicates_inplace(self):
df.drop_duplicates('A', take_last=True, inplace=True)
expected = orig.ix[[6, 7]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# multi column
df = orig.copy()
df.drop_duplicates(['A', 'B'], inplace=True)
expected = orig.ix[[0, 1, 2, 3]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = orig.copy()
df.drop_duplicates(['A', 'B'], keep='last', inplace=True)
expected = orig.ix[[0, 5, 6, 7]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df = orig.copy()
df.drop_duplicates(['A', 'B'], keep=False, inplace=True)
expected = orig.ix[[0]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
df = orig.copy()
@@ -1758,7 +1753,7 @@ def test_drop_duplicates_inplace(self):
df.drop_duplicates(['A', 'B'], take_last=True, inplace=True)
expected = orig.ix[[0, 5, 6, 7]]
result = df
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# consider everything
orig2 = orig.ix[:, ['A', 'B', 'C']].copy()
@@ -1768,19 +1763,19 @@ def test_drop_duplicates_inplace(self):
# in this case only
expected = orig2.drop_duplicates(['A', 'B'])
result = df2
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df2 = orig2.copy()
df2.drop_duplicates(keep='last', inplace=True)
expected = orig2.drop_duplicates(['A', 'B'], keep='last')
result = df2
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
df2 = orig2.copy()
df2.drop_duplicates(keep=False, inplace=True)
expected = orig2.drop_duplicates(['A', 'B'], keep=False)
result = df2
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# deprecate take_last
df2 = orig2.copy()
@@ -1789,7 +1784,7 @@ def test_drop_duplicates_inplace(self):
with tm.assert_produces_warning(FutureWarning):
expected = orig2.drop_duplicates(['A', 'B'], take_last=True)
result = df2
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# Rounding
@@ -1798,26 +1793,26 @@ def test_round(self):
# Test that rounding an empty DataFrame does nothing
df = DataFrame()
- assert_frame_equal(df, df.round())
+ tm.assert_frame_equal(df, df.round())
# Here's the test frame we'll be working with
- df = DataFrame(
- {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]})
+ df = DataFrame({'col1': [1.123, 2.123, 3.123],
+ 'col2': [1.234, 2.234, 3.234]})
# Default round to integer (i.e. decimals=0)
expected_rounded = DataFrame(
{'col1': [1., 2., 3.], 'col2': [1., 2., 3.]})
- assert_frame_equal(df.round(), expected_rounded)
+ tm.assert_frame_equal(df.round(), expected_rounded)
# Round with an integer
decimals = 2
- expected_rounded = DataFrame(
- {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
- assert_frame_equal(df.round(decimals), expected_rounded)
+ expected_rounded = DataFrame({'col1': [1.12, 2.12, 3.12],
+ 'col2': [1.23, 2.23, 3.23]})
+ tm.assert_frame_equal(df.round(decimals), expected_rounded)
# This should also work with np.round (since np.round dispatches to
# df.round)
- assert_frame_equal(np.round(df, decimals), expected_rounded)
+ tm.assert_frame_equal(np.round(df, decimals), expected_rounded)
# Round with a list
round_list = [1, 2]
@@ -1828,19 +1823,19 @@ def test_round(self):
expected_rounded = DataFrame(
{'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]})
round_dict = {'col1': 1, 'col2': 2}
- assert_frame_equal(df.round(round_dict), expected_rounded)
+ tm.assert_frame_equal(df.round(round_dict), expected_rounded)
# Incomplete dict
expected_partially_rounded = DataFrame(
{'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]})
partial_round_dict = {'col2': 1}
- assert_frame_equal(
- df.round(partial_round_dict), expected_partially_rounded)
+ tm.assert_frame_equal(df.round(partial_round_dict),
+ expected_partially_rounded)
# Dict with unknown elements
wrong_round_dict = {'col3': 2, 'col2': 1}
- assert_frame_equal(
- df.round(wrong_round_dict), expected_partially_rounded)
+ tm.assert_frame_equal(df.round(wrong_round_dict),
+ expected_partially_rounded)
# float input to `decimals`
non_int_round_dict = {'col1': 1, 'col2': 0.5}
@@ -1879,8 +1874,8 @@ def test_round(self):
big_df = df * 100
expected_neg_rounded = DataFrame(
{'col1': [110., 210, 310], 'col2': [100., 200, 300]})
- assert_frame_equal(
- big_df.round(negative_round_dict), expected_neg_rounded)
+ tm.assert_frame_equal(big_df.round(negative_round_dict),
+ expected_neg_rounded)
# nan in Series round
nan_round_Series = Series({'col1': nan, 'col2': 1})
@@ -1899,7 +1894,7 @@ def test_round(self):
df.round(nan_round_Series)
# Make sure this doesn't break existing Series.round
- assert_series_equal(df['col1'].round(1), expected_rounded['col1'])
+ tm.assert_series_equal(df['col1'].round(1), expected_rounded['col1'])
# named columns
# GH 11986
@@ -1908,20 +1903,20 @@ def test_round(self):
{'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
df.columns.name = "cols"
expected_rounded.columns.name = "cols"
- assert_frame_equal(df.round(decimals), expected_rounded)
+ tm.assert_frame_equal(df.round(decimals), expected_rounded)
# interaction of named columns & series
- assert_series_equal(df['col1'].round(decimals),
- expected_rounded['col1'])
- assert_series_equal(df.round(decimals)['col1'],
- expected_rounded['col1'])
+ tm.assert_series_equal(df['col1'].round(decimals),
+ expected_rounded['col1'])
+ tm.assert_series_equal(df.round(decimals)['col1'],
+ expected_rounded['col1'])
def test_numpy_round(self):
# See gh-12600
df = DataFrame([[1.53, 1.36], [0.06, 7.01]])
out = np.round(df, decimals=0)
expected = DataFrame([[2., 1.], [0., 7.]])
- assert_frame_equal(out, expected)
+ tm.assert_frame_equal(out, expected)
msg = "the 'out' parameter is not supported"
with tm.assertRaisesRegexp(ValueError, msg):
@@ -1935,12 +1930,12 @@ def test_round_mixed_type(self):
round_0 = DataFrame({'col1': [1., 2., 3., 4.],
'col2': ['1', 'a', 'c', 'f'],
'col3': date_range('20111111', periods=4)})
- assert_frame_equal(df.round(), round_0)
- assert_frame_equal(df.round(1), df)
- assert_frame_equal(df.round({'col1': 1}), df)
- assert_frame_equal(df.round({'col1': 0}), round_0)
- assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0)
- assert_frame_equal(df.round({'col3': 1}), df)
+ tm.assert_frame_equal(df.round(), round_0)
+ tm.assert_frame_equal(df.round(1), df)
+ tm.assert_frame_equal(df.round({'col1': 1}), df)
+ tm.assert_frame_equal(df.round({'col1': 0}), round_0)
+ tm.assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0)
+ tm.assert_frame_equal(df.round({'col3': 1}), df)
def test_round_issue(self):
# GH11611
@@ -1950,7 +1945,7 @@ def test_round_issue(self):
dfs = pd.concat((df, df), axis=1)
rounded = dfs.round()
- self.assertTrue(rounded.index.equals(dfs.index))
+ self.assert_index_equal(rounded.index, dfs.index)
decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A'])
self.assertRaises(ValueError, df.round, decimals)
@@ -1968,7 +1963,7 @@ def test_built_in_round(self):
# Default round to integer (i.e. decimals=0)
expected_rounded = DataFrame(
{'col1': [1., 2., 3.], 'col2': [1., 2., 3.]})
- assert_frame_equal(round(df), expected_rounded)
+ tm.assert_frame_equal(round(df), expected_rounded)
# Clip
@@ -2015,14 +2010,14 @@ def test_clip_against_series(self):
mask = ~lb_mask & ~ub_mask
result = clipped_df.loc[lb_mask, i]
- assert_series_equal(result, lb[lb_mask], check_names=False)
+ tm.assert_series_equal(result, lb[lb_mask], check_names=False)
self.assertEqual(result.name, i)
result = clipped_df.loc[ub_mask, i]
- assert_series_equal(result, ub[ub_mask], check_names=False)
+ tm.assert_series_equal(result, ub[ub_mask], check_names=False)
self.assertEqual(result.name, i)
- assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i])
+ tm.assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i])
def test_clip_against_frame(self):
df = DataFrame(np.random.randn(1000, 2))
@@ -2035,9 +2030,9 @@ def test_clip_against_frame(self):
ub_mask = df >= ub
mask = ~lb_mask & ~ub_mask
- assert_frame_equal(clipped_df[lb_mask], lb[lb_mask])
- assert_frame_equal(clipped_df[ub_mask], ub[ub_mask])
- assert_frame_equal(clipped_df[mask], df[mask])
+ tm.assert_frame_equal(clipped_df[lb_mask], lb[lb_mask])
+ tm.assert_frame_equal(clipped_df[ub_mask], ub[ub_mask])
+ tm.assert_frame_equal(clipped_df[mask], df[mask])
# Matrix-like
@@ -2054,15 +2049,15 @@ def test_dot(self):
# Check alignment
b1 = b.reindex(index=reversed(b.index))
result = a.dot(b)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# Check series argument
result = a.dot(b['one'])
- assert_series_equal(result, expected['one'], check_names=False)
+ tm.assert_series_equal(result, expected['one'], check_names=False)
self.assertTrue(result.name is None)
result = a.dot(b1['one'])
- assert_series_equal(result, expected['one'], check_names=False)
+ tm.assert_series_equal(result, expected['one'], check_names=False)
self.assertTrue(result.name is None)
# can pass correct-length arrays
@@ -2070,9 +2065,9 @@ def test_dot(self):
result = a.dot(row)
exp = a.dot(a.ix[0])
- assert_series_equal(result, exp)
+ tm.assert_series_equal(result, exp)
- with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
+ with tm.assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
a.dot(row[:-1])
a = np.random.rand(1, 5)
@@ -2089,7 +2084,8 @@ def test_dot(self):
df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4))
df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3])
- assertRaisesRegexp(ValueError, 'aligned', df.dot, df2)
+ with tm.assertRaisesRegexp(ValueError, 'aligned'):
+ df.dot(df2)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
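As an aside (not part of the diff itself): the pattern this refactor applies throughout is calling the pandas testing helpers through a module namespace (`tm.assert_frame_equal`) rather than importing them bare. A minimal sketch of the namespaced style, assuming a current pandas where the public helpers live in `pandas.testing`:

```python
import pandas as pd
import pandas.testing as pdt  # historically pandas.util.testing, imported as tm

# Build a small frame and exercise nlargest, mirroring the tests above
df = pd.DataFrame({'a': [3, 1, 2], 'b': list('xyz')})
result = df.nlargest(2, 'a')
expected = df.sort_values('a', ascending=False).head(2)

# Raises a descriptive AssertionError on any mismatch (dtype, index, values)
pdt.assert_frame_equal(result, expected)
```

Keeping the call namespaced makes it obvious at the call site that the assertion comes from the pandas test utilities rather than the test class.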
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index 09dd0f3b14812..07fe28f13b7d0 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -221,7 +221,7 @@ def test_reindex(self):
# pass non-Index
newFrame = self.frame.reindex(list(self.ts1.index))
- self.assertTrue(newFrame.index.equals(self.ts1.index))
+ self.assert_index_equal(newFrame.index, self.ts1.index)
# copy with no axes
result = self.frame.reindex()
@@ -381,7 +381,7 @@ def test_align(self):
# axis = 0
other = self.frame.ix[:-5, :3]
af, bf = self.frame.align(other, axis=0, fill_value=-1)
- self.assertTrue(bf.columns.equals(other.columns))
+ self.assert_index_equal(bf.columns, other.columns)
# test fill value
join_idx = self.frame.index.join(other.index)
diff_a = self.frame.index.difference(join_idx)
@@ -391,15 +391,15 @@ def test_align(self):
self.assertTrue((diff_a_vals == -1).all())
af, bf = self.frame.align(other, join='right', axis=0)
- self.assertTrue(bf.columns.equals(other.columns))
- self.assertTrue(bf.index.equals(other.index))
- self.assertTrue(af.index.equals(other.index))
+ self.assert_index_equal(bf.columns, other.columns)
+ self.assert_index_equal(bf.index, other.index)
+ self.assert_index_equal(af.index, other.index)
# axis = 1
other = self.frame.ix[:-5, :3].copy()
af, bf = self.frame.align(other, axis=1)
- self.assertTrue(bf.columns.equals(self.frame.columns))
- self.assertTrue(bf.index.equals(other.index))
+ self.assert_index_equal(bf.columns, self.frame.columns)
+ self.assert_index_equal(bf.index, other.index)
# test fill value
join_idx = self.frame.index.join(other.index)
@@ -413,35 +413,35 @@ def test_align(self):
self.assertTrue((diff_a_vals == -1).all())
af, bf = self.frame.align(other, join='inner', axis=1)
- self.assertTrue(bf.columns.equals(other.columns))
+ self.assert_index_equal(bf.columns, other.columns)
af, bf = self.frame.align(other, join='inner', axis=1, method='pad')
- self.assertTrue(bf.columns.equals(other.columns))
+ self.assert_index_equal(bf.columns, other.columns)
# test other non-float types
af, bf = self.intframe.align(other, join='inner', axis=1, method='pad')
- self.assertTrue(bf.columns.equals(other.columns))
+ self.assert_index_equal(bf.columns, other.columns)
af, bf = self.mixed_frame.align(self.mixed_frame,
join='inner', axis=1, method='pad')
- self.assertTrue(bf.columns.equals(self.mixed_frame.columns))
+ self.assert_index_equal(bf.columns, self.mixed_frame.columns)
af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=None)
- self.assertTrue(bf.index.equals(Index([])))
+ self.assert_index_equal(bf.index, Index([]))
af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=0)
- self.assertTrue(bf.index.equals(Index([])))
+ self.assert_index_equal(bf.index, Index([]))
# mixed floats/ints
af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=0)
- self.assertTrue(bf.index.equals(Index([])))
+ self.assert_index_equal(bf.index, Index([]))
af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
method=None, fill_value=0)
- self.assertTrue(bf.index.equals(Index([])))
+ self.assert_index_equal(bf.index, Index([]))
# try to align dataframe to series along bad axis
self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
@@ -810,10 +810,9 @@ def test_reindex_corner(self):
index = Index(['a', 'b', 'c'])
dm = self.empty.reindex(index=[1, 2, 3])
reindexed = dm.reindex(columns=index)
- self.assertTrue(reindexed.columns.equals(index))
+ self.assert_index_equal(reindexed.columns, index)
# ints are weird
-
smaller = self.intframe.reindex(columns=['A', 'B', 'E'])
self.assertEqual(smaller['E'].dtype, np.float64)
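The other recurring change in these hunks swaps `self.assertTrue(a.equals(b))` for `assert_index_equal(a, b)`. A short sketch of why, assuming current pandas naming (`pandas.testing`): the dedicated helper raises a descriptive error showing where the indexes differ, whereas a bare truthiness check only reports that they are unequal.

```python
import pandas as pd
import pandas.testing as pdt

left = pd.Index(['a', 'b', 'e'])
right = pd.Index(['a', 'b', 'e'])

# Equivalent boolean check: left.equals(right) -> True, but on failure
# it gives no detail. assert_index_equal also verifies dtype and name.
pdt.assert_index_equal(left, right)
```

On a mismatch, `assert_index_equal` reports the differing positions and values, which is what makes these diffs an ergonomic win for test debugging.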
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index f337bf48c05ee..0421cf2ba42d2 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -505,8 +505,8 @@ def test_get_X_columns(self):
'd': [None, None, None],
'e': [3.14, 0.577, 2.773]})
- self.assert_numpy_array_equal(df._get_numeric_data().columns,
- ['a', 'b', 'e'])
+ self.assert_index_equal(df._get_numeric_data().columns,
+ pd.Index(['a', 'b', 'e']))
def test_strange_column_corruption_issue(self):
# (wesm) Unclear how exactly this is related to internal matters
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 6913df765862d..a050d74f0fc51 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -24,12 +24,6 @@
import pandas as pd
import pandas.core.common as com
import pandas.lib as lib
-
-from pandas.util.testing import (assert_numpy_array_equal,
- assert_series_equal,
- assert_frame_equal,
- assertRaisesRegexp)
-
import pandas.util.testing as tm
from pandas.tests.frame.common import TestData
@@ -171,16 +165,16 @@ def test_constructor_rec(self):
index = self.frame.index
df = DataFrame(rec)
- self.assert_numpy_array_equal(df.columns, rec.dtype.names)
+ self.assert_index_equal(df.columns, pd.Index(rec.dtype.names))
df2 = DataFrame(rec, index=index)
- self.assert_numpy_array_equal(df2.columns, rec.dtype.names)
- self.assertTrue(df2.index.equals(index))
+ self.assert_index_equal(df2.columns, pd.Index(rec.dtype.names))
+ self.assert_index_equal(df2.index, index)
rng = np.arange(len(rec))[::-1]
df3 = DataFrame(rec, index=rng, columns=['C', 'B'])
expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B'])
- assert_frame_equal(df3, expected)
+ tm.assert_frame_equal(df3, expected)
def test_constructor_bool(self):
df = DataFrame({0: np.ones(10, dtype=bool),
@@ -223,6 +217,7 @@ def test_constructor_dict(self):
self.assertEqual(len(self.ts2), 25)
tm.assert_series_equal(self.ts1, frame['col1'], check_names=False)
+
exp = pd.Series(np.concatenate([[np.nan] * 5, self.ts2.values]),
index=self.ts1.index, name='col2')
tm.assert_series_equal(exp, frame['col2'])
@@ -245,7 +240,7 @@ def test_constructor_dict(self):
# Length-one dict micro-optimization
frame = DataFrame({'A': {'1': 1, '2': 2}})
- self.assert_numpy_array_equal(frame.index, ['1', '2'])
+ self.assert_index_equal(frame.index, pd.Index(['1', '2']))
# empty dict plus index
idx = Index([0, 1, 2])
@@ -261,7 +256,7 @@ def test_constructor_dict(self):
# with dict of empty list and Series
frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B'])
- self.assertTrue(frame.index.equals(Index([])))
+ self.assert_index_equal(frame.index, Index([], dtype=np.int64))
# GH10856
# dict with scalar values should raise error, even if columns passed
@@ -290,37 +285,37 @@ def test_constructor_multi_index(self):
def test_constructor_error_msgs(self):
msg = "Empty data passed with indices specified."
# passing an empty array with columns specified.
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame(np.empty(0), columns=list('abc'))
msg = "Mixing dicts with non-Series may lead to ambiguous ordering."
# mix dict and array, wrong size
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame({'A': {'a': 'a', 'b': 'b'},
'B': ['a', 'b', 'c']})
# wrong size ndarray, GH 3105
msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)"
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame(np.arange(12).reshape((4, 3)),
columns=['foo', 'bar', 'baz'],
index=pd.date_range('2000-01-01', periods=3))
# higher dim raise exception
- with assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
+ with tm.assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1])
# wrong size axis labels
- with assertRaisesRegexp(ValueError, "Shape of passed values is "
- "\(3, 2\), indices imply \(3, 1\)"):
+ with tm.assertRaisesRegexp(ValueError, "Shape of passed values is "
+ "\(3, 2\), indices imply \(3, 1\)"):
DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'], index=[1])
- with assertRaisesRegexp(ValueError, "Shape of passed values is "
- "\(3, 2\), indices imply \(2, 2\)"):
+ with tm.assertRaisesRegexp(ValueError, "Shape of passed values is "
+ "\(3, 2\), indices imply \(2, 2\)"):
DataFrame(np.random.rand(2, 3), columns=['A', 'B'], index=[1, 2])
- with assertRaisesRegexp(ValueError, 'If using all scalar values, you '
- 'must pass an index'):
+ with tm.assertRaisesRegexp(ValueError, 'If using all scalar values, '
+ 'you must pass an index'):
DataFrame({'a': False, 'b': True})
def test_constructor_with_embedded_frames(self):
@@ -333,10 +328,10 @@ def test_constructor_with_embedded_frames(self):
str(df2)
result = df2.loc[0, 0]
- assert_frame_equal(result, df1)
+ tm.assert_frame_equal(result, df1)
result = df2.loc[1, 0]
- assert_frame_equal(result, df1 + 10)
+ tm.assert_frame_equal(result, df1 + 10)
def test_constructor_subclass_dict(self):
# Test for passing dict subclass to constructor
@@ -345,11 +340,11 @@ def test_constructor_subclass_dict(self):
df = DataFrame(data)
refdf = DataFrame(dict((col, dict(compat.iteritems(val)))
for col, val in compat.iteritems(data)))
- assert_frame_equal(refdf, df)
+ tm.assert_frame_equal(refdf, df)
data = tm.TestSubDict(compat.iteritems(data))
df = DataFrame(data)
- assert_frame_equal(refdf, df)
+ tm.assert_frame_equal(refdf, df)
# try with defaultdict
from collections import defaultdict
@@ -360,10 +355,10 @@ def test_constructor_subclass_dict(self):
dct.update(v.to_dict())
data[k] = dct
frame = DataFrame(data)
- assert_frame_equal(self.frame.sort_index(), frame)
+ tm.assert_frame_equal(self.frame.sort_index(), frame)
def test_constructor_dict_block(self):
- expected = [[4., 3., 2., 1.]]
+ expected = np.array([[4., 3., 2., 1.]])
df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]},
columns=['d', 'c', 'b', 'a'])
tm.assert_numpy_array_equal(df.values, expected)
@@ -409,10 +404,10 @@ def test_constructor_dict_of_tuples(self):
result = DataFrame(data)
expected = DataFrame(dict((k, list(v))
for k, v in compat.iteritems(data)))
- assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_dict_multiindex(self):
- check = lambda result, expected: assert_frame_equal(
+ check = lambda result, expected: tm.assert_frame_equal(
result, expected, check_dtype=True, check_index_type=True,
check_column_type=True, check_names=True)
d = {('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2},
@@ -457,9 +452,9 @@ def create_data(constructor):
result_datetime64 = DataFrame(data_datetime64)
result_datetime = DataFrame(data_datetime)
result_Timestamp = DataFrame(data_Timestamp)
- assert_frame_equal(result_datetime64, expected)
- assert_frame_equal(result_datetime, expected)
- assert_frame_equal(result_Timestamp, expected)
+ tm.assert_frame_equal(result_datetime64, expected)
+ tm.assert_frame_equal(result_datetime, expected)
+ tm.assert_frame_equal(result_Timestamp, expected)
def test_constructor_dict_timedelta64_index(self):
# GH 10160
@@ -482,9 +477,9 @@ def create_data(constructor):
result_timedelta64 = DataFrame(data_timedelta64)
result_timedelta = DataFrame(data_timedelta)
result_Timedelta = DataFrame(data_Timedelta)
- assert_frame_equal(result_timedelta64, expected)
- assert_frame_equal(result_timedelta, expected)
- assert_frame_equal(result_Timedelta, expected)
+ tm.assert_frame_equal(result_timedelta64, expected)
+ tm.assert_frame_equal(result_timedelta, expected)
+ tm.assert_frame_equal(result_Timedelta, expected)
def test_constructor_period(self):
# PeriodIndex
@@ -510,7 +505,7 @@ def test_nested_dict_frame_constructor(self):
data.setdefault(col, {})[row] = df.get_value(row, col)
result = DataFrame(data, columns=rng)
- assert_frame_equal(result, df)
+ tm.assert_frame_equal(result, df)
data = {}
for col in df.columns:
@@ -518,7 +513,7 @@ def test_nested_dict_frame_constructor(self):
data.setdefault(row, {})[col] = df.get_value(row, col)
result = DataFrame(data, index=rng).T
- assert_frame_equal(result, df)
+ tm.assert_frame_equal(result, df)
def _check_basic_constructor(self, empty):
        # mat: 2d matrix with shape (3, 2) to input. empty - makes sized
@@ -542,27 +537,27 @@ def _check_basic_constructor(self, empty):
# wrong size axis labels
msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)'
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame(mat, columns=['A', 'B', 'C'], index=[1])
msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)'
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame(mat, columns=['A', 'B'], index=[1, 2])
# higher dim raise exception
- with assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
+ with tm.assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'],
index=[1])
# automatic labeling
frame = DataFrame(mat)
- self.assert_numpy_array_equal(frame.index, lrange(2))
- self.assert_numpy_array_equal(frame.columns, lrange(3))
+ self.assert_index_equal(frame.index, pd.Index(lrange(2)))
+ self.assert_index_equal(frame.columns, pd.Index(lrange(3)))
frame = DataFrame(mat, index=[1, 2])
- self.assert_numpy_array_equal(frame.columns, lrange(3))
+ self.assert_index_equal(frame.columns, pd.Index(lrange(3)))
frame = DataFrame(mat, columns=['A', 'B', 'C'])
- self.assert_numpy_array_equal(frame.index, lrange(2))
+ self.assert_index_equal(frame.index, pd.Index(lrange(2)))
# 0-length axis
frame = DataFrame(empty((0, 3)))
@@ -664,7 +659,7 @@ def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
# from GH3479
- assert_fr_equal = functools.partial(assert_frame_equal,
+ assert_fr_equal = functools.partial(tm.assert_frame_equal,
check_index_type=True,
check_column_type=True,
check_frame_type=True)
@@ -738,13 +733,13 @@ def test_constructor_arrays_and_scalars(self):
df = DataFrame({'a': randn(10), 'b': True})
exp = DataFrame({'a': df['a'].values, 'b': [True] * 10})
- assert_frame_equal(df, exp)
+ tm.assert_frame_equal(df, exp)
with tm.assertRaisesRegexp(ValueError, 'must pass an index'):
DataFrame({'a': False, 'b': True})
def test_constructor_DataFrame(self):
df = DataFrame(self.frame)
- assert_frame_equal(df, self.frame)
+ tm.assert_frame_equal(df, self.frame)
df_casted = DataFrame(self.frame, dtype=np.int64)
self.assertEqual(df_casted.values.dtype, np.int64)
@@ -772,17 +767,17 @@ def test_constructor_more(self):
# corner, silly
# TODO: Fix this Exception to be better...
- with assertRaisesRegexp(PandasError, 'constructor not '
- 'properly called'):
+ with tm.assertRaisesRegexp(PandasError, 'constructor not '
+ 'properly called'):
DataFrame((1, 2, 3))
# can't cast
mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1)
- with assertRaisesRegexp(ValueError, 'cast'):
+ with tm.assertRaisesRegexp(ValueError, 'cast'):
DataFrame(mat, index=[0, 1], columns=[0], dtype=float)
dm = DataFrame(DataFrame(self.frame._series))
- assert_frame_equal(dm, self.frame)
+ tm.assert_frame_equal(dm, self.frame)
# int cast
dm = DataFrame({'A': np.ones(10, dtype=int),
@@ -795,12 +790,12 @@ def test_constructor_more(self):
def test_constructor_empty_list(self):
df = DataFrame([], index=[])
expected = DataFrame(index=[])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
# GH 9939
df = DataFrame([], columns=['A', 'B'])
expected = DataFrame({}, columns=['A', 'B'])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
# Empty generator: list(empty_gen()) == []
def empty_gen():
@@ -808,7 +803,7 @@ def empty_gen():
yield
df = DataFrame(empty_gen(), columns=['A', 'B'])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
def test_constructor_list_of_lists(self):
# GH #484
@@ -822,7 +817,7 @@ def test_constructor_list_of_lists(self):
expected = DataFrame({0: range(10)})
data = [np.array(x) for x in range(10)]
result = DataFrame(data)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_sequence_like(self):
# GH 3783
@@ -844,25 +839,25 @@ def __len__(self, n):
columns = ["num", "str"]
result = DataFrame(l, columns=columns)
expected = DataFrame([[1, 'a'], [2, 'b']], columns=columns)
- assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected, check_dtype=False)
# GH 4297
# support Array
import array
result = DataFrame.from_items([('A', array.array('i', range(10)))])
expected = DataFrame({'A': list(range(10))})
- assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected, check_dtype=False)
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([array.array('i', range(10)),
array.array('i', range(10))])
- assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_iterator(self):
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([range(10), range(10)])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_generator(self):
# related #2305
@@ -872,12 +867,12 @@ def test_constructor_generator(self):
expected = DataFrame([list(range(10)), list(range(10))])
result = DataFrame([gen1, gen2])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
gen = ([i, 'a'] for i in range(10))
result = DataFrame(gen)
expected = DataFrame({0: range(10), 1: 'a'})
- assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected, check_dtype=False)
def test_constructor_list_of_dicts(self):
data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
@@ -890,11 +885,11 @@ def test_constructor_list_of_dicts(self):
result = DataFrame(data)
expected = DataFrame.from_dict(dict(zip(range(len(data)), data)),
orient='index')
- assert_frame_equal(result, expected.reindex(result.index))
+ tm.assert_frame_equal(result, expected.reindex(result.index))
result = DataFrame([{}])
expected = DataFrame(index=[0])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_list_of_series(self):
data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
@@ -907,7 +902,7 @@ def test_constructor_list_of_series(self):
Series([1.5, 3, 6], idx, name='y')]
result = DataFrame(data2)
expected = DataFrame.from_dict(sdict, orient='index')
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# some unnamed
data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'),
@@ -916,7 +911,7 @@ def test_constructor_list_of_series(self):
sdict = OrderedDict(zip(['x', 'Unnamed 0'], data))
expected = DataFrame.from_dict(sdict, orient='index')
- assert_frame_equal(result.sort_index(), expected)
+ tm.assert_frame_equal(result.sort_index(), expected)
# none named
data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
@@ -930,14 +925,14 @@ def test_constructor_list_of_series(self):
result = DataFrame(data)
sdict = OrderedDict(zip(range(len(data)), data))
expected = DataFrame.from_dict(sdict, orient='index')
- assert_frame_equal(result, expected.reindex(result.index))
+ tm.assert_frame_equal(result, expected.reindex(result.index))
result2 = DataFrame(data, index=np.arange(6))
- assert_frame_equal(result, result2)
+ tm.assert_frame_equal(result, result2)
result = DataFrame([Series({})])
expected = DataFrame(index=[0])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
@@ -948,7 +943,7 @@ def test_constructor_list_of_series(self):
Series([1.5, 3, 6], idx)]
result = DataFrame(data2)
expected = DataFrame.from_dict(sdict, orient='index')
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_list_of_derived_dicts(self):
class CustomDict(dict):
@@ -960,19 +955,20 @@ class CustomDict(dict):
result_custom = DataFrame(data_custom)
result = DataFrame(data)
- assert_frame_equal(result, result_custom)
+ tm.assert_frame_equal(result, result_custom)
def test_constructor_ragged(self):
data = {'A': randn(10),
'B': randn(8)}
- with assertRaisesRegexp(ValueError, 'arrays must all be same length'):
+ with tm.assertRaisesRegexp(ValueError,
+ 'arrays must all be same length'):
DataFrame(data)
def test_constructor_scalar(self):
idx = Index(lrange(3))
df = DataFrame({"a": 0}, index=idx)
expected = DataFrame({"a": [0, 0, 0]}, index=idx)
- assert_frame_equal(df, expected, check_dtype=False)
+ tm.assert_frame_equal(df, expected, check_dtype=False)
def test_constructor_Series_copy_bug(self):
df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A'])
@@ -987,7 +983,7 @@ def test_constructor_mixed_dict_and_Series(self):
self.assertTrue(result.index.is_monotonic)
# ordering ambiguous, raise exception
- with assertRaisesRegexp(ValueError, 'ambiguous ordering'):
+ with tm.assertRaisesRegexp(ValueError, 'ambiguous ordering'):
DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}})
# this is OK though
@@ -995,12 +991,12 @@ def test_constructor_mixed_dict_and_Series(self):
'B': Series(['a', 'b'], index=['a', 'b'])})
expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']},
index=['a', 'b'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_tuples(self):
result = DataFrame({'A': [(1, 2), (3, 4)]})
expected = DataFrame({'A': Series([(1, 2), (3, 4)])})
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_namedtuples(self):
# GH11181
@@ -1009,43 +1005,43 @@ def test_constructor_namedtuples(self):
tuples = [named_tuple(1, 3), named_tuple(2, 4)]
expected = DataFrame({'a': [1, 2], 'b': [3, 4]})
result = DataFrame(tuples)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# with columns
expected = DataFrame({'y': [1, 2], 'z': [3, 4]})
result = DataFrame(tuples, columns=['y', 'z'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_constructor_orient(self):
data_dict = self.mixed_frame.T._series
recons = DataFrame.from_dict(data_dict, orient='index')
expected = self.mixed_frame.sort_index()
- assert_frame_equal(recons, expected)
+ tm.assert_frame_equal(recons, expected)
# dict of sequence
a = {'hi': [32, 3, 3],
'there': [3, 5, 3]}
rs = DataFrame.from_dict(a, orient='index')
xp = DataFrame.from_dict(a).T.reindex(list(a.keys()))
- assert_frame_equal(rs, xp)
+ tm.assert_frame_equal(rs, xp)
def test_constructor_Series_named(self):
a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
df = DataFrame(a)
self.assertEqual(df.columns[0], 'x')
- self.assertTrue(df.index.equals(a.index))
+ self.assert_index_equal(df.index, a.index)
# ndarray like
arr = np.random.randn(10)
s = Series(arr, name='x')
df = DataFrame(s)
expected = DataFrame(dict(x=s))
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
s = Series(arr, index=range(3, 13))
df = DataFrame(s)
expected = DataFrame({0: s})
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
self.assertRaises(ValueError, DataFrame, s, columns=[1, 2])
@@ -1059,12 +1055,12 @@ def test_constructor_Series_named(self):
df = DataFrame([s1, arr]).T
expected = DataFrame({'x': s1, 'Unnamed 0': arr},
columns=['x', 'Unnamed 0'])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
# this is a bit non-intuitive here; the series collapse down to arrays
df = DataFrame([arr, s1]).T
expected = DataFrame({1: s1, 0: arr}, columns=[0, 1])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
def test_constructor_Series_differently_indexed(self):
# name
@@ -1078,13 +1074,13 @@ def test_constructor_Series_differently_indexed(self):
df1 = DataFrame(s1, index=other_index)
exp1 = DataFrame(s1.reindex(other_index))
self.assertEqual(df1.columns[0], 'x')
- assert_frame_equal(df1, exp1)
+ tm.assert_frame_equal(df1, exp1)
df2 = DataFrame(s2, index=other_index)
exp2 = DataFrame(s2.reindex(other_index))
self.assertEqual(df2.columns[0], 0)
- self.assertTrue(df2.index.equals(other_index))
- assert_frame_equal(df2, exp2)
+ self.assert_index_equal(df2.index, other_index)
+ tm.assert_frame_equal(df2, exp2)
def test_constructor_manager_resize(self):
index = list(self.frame.index[:5])
@@ -1092,17 +1088,17 @@ def test_constructor_manager_resize(self):
result = DataFrame(self.frame._data, index=index,
columns=columns)
- self.assert_numpy_array_equal(result.index, index)
- self.assert_numpy_array_equal(result.columns, columns)
+ self.assert_index_equal(result.index, Index(index))
+ self.assert_index_equal(result.columns, Index(columns))
def test_constructor_from_items(self):
items = [(c, self.frame[c]) for c in self.frame.columns]
recons = DataFrame.from_items(items)
- assert_frame_equal(recons, self.frame)
+ tm.assert_frame_equal(recons, self.frame)
# pass some columns
recons = DataFrame.from_items(items, columns=['C', 'B', 'A'])
- assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']])
+ tm.assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']])
# orient='index'
@@ -1112,7 +1108,7 @@ def test_constructor_from_items(self):
recons = DataFrame.from_items(row_items,
columns=self.mixed_frame.columns,
orient='index')
- assert_frame_equal(recons, self.mixed_frame)
+ tm.assert_frame_equal(recons, self.mixed_frame)
self.assertEqual(recons['A'].dtype, np.float64)
with tm.assertRaisesRegexp(TypeError,
@@ -1128,7 +1124,7 @@ def test_constructor_from_items(self):
recons = DataFrame.from_items(row_items,
columns=self.mixed_frame.columns,
orient='index')
- assert_frame_equal(recons, self.mixed_frame)
+ tm.assert_frame_equal(recons, self.mixed_frame)
tm.assertIsInstance(recons['foo'][0], tuple)
rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
@@ -1136,12 +1132,12 @@ def test_constructor_from_items(self):
columns=['one', 'two', 'three'])
xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'],
columns=['one', 'two', 'three'])
- assert_frame_equal(rs, xp)
+ tm.assert_frame_equal(rs, xp)
def test_constructor_mix_series_nonseries(self):
df = DataFrame({'A': self.frame['A'],
'B': list(self.frame['B'])}, columns=['A', 'B'])
- assert_frame_equal(df, self.frame.ix[:, ['A', 'B']])
+ tm.assert_frame_equal(df, self.frame.ix[:, ['A', 'B']])
with tm.assertRaisesRegexp(ValueError, 'does not match index length'):
DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]})
@@ -1149,10 +1145,10 @@ def test_constructor_mix_series_nonseries(self):
def test_constructor_miscast_na_int_dtype(self):
df = DataFrame([[np.nan, 1], [1, 0]], dtype=np.int64)
expected = DataFrame([[np.nan, 1], [1, 0]])
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
def test_constructor_iterator_failure(self):
- with assertRaisesRegexp(TypeError, 'iterator'):
+ with tm.assertRaisesRegexp(TypeError, 'iterator'):
df = DataFrame(iter([1, 2, 3])) # noqa
def test_constructor_column_duplicates(self):
@@ -1161,11 +1157,11 @@ def test_constructor_column_duplicates(self):
edf = DataFrame([[8, 5]])
edf.columns = ['a', 'a']
- assert_frame_equal(df, edf)
+ tm.assert_frame_equal(df, edf)
idf = DataFrame.from_items(
[('a', [8]), ('a', [5])], columns=['a', 'a'])
- assert_frame_equal(idf, edf)
+ tm.assert_frame_equal(idf, edf)
self.assertRaises(ValueError, DataFrame.from_items,
[('a', [8]), ('a', [5]), ('b', [6])],
@@ -1176,30 +1172,29 @@ def test_constructor_empty_with_string_dtype(self):
expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str)
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_)
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_)
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5')
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
def test_constructor_single_value(self):
# expecting single value upcasting here
df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c'])
- assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'),
- df.index, df.columns))
+ tm.assert_frame_equal(df,
+ DataFrame(np.zeros(df.shape).astype('float64'),
+ df.index, df.columns))
df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c'])
- assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'),
- df.index, df.columns))
+ tm.assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'),
+ df.index, df.columns))
df = DataFrame('a', index=[1, 2], columns=['a', 'c'])
- assert_frame_equal(df, DataFrame(np.array([['a', 'a'],
- ['a', 'a']],
- dtype=object),
- index=[1, 2],
- columns=['a', 'c']))
+ tm.assert_frame_equal(df, DataFrame(np.array([['a', 'a'], ['a', 'a']],
+ dtype=object),
+ index=[1, 2], columns=['a', 'c']))
self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2])
self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c'])
@@ -1221,7 +1216,7 @@ def test_constructor_with_datetimes(self):
expected = Series({'int64': 1, datetime64name: 2, objectname: 2})
result.sort_index()
expected.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# check with ndarray construction ndim==0 (e.g. we are passing a ndim 0
# ndarray with a dtype specified)
@@ -1245,7 +1240,7 @@ def test_constructor_with_datetimes(self):
result.sort_index()
expected = Series(expected)
expected.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# check with ndarray construction ndim>0
df = DataFrame({'a': 1., 'b': 2, 'c': 'foo',
@@ -1254,7 +1249,7 @@ def test_constructor_with_datetimes(self):
index=np.arange(10))
result = df.get_dtype_counts()
result.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# GH 2809
ind = date_range(start="2000-01-01", freq="D", periods=10)
@@ -1266,7 +1261,7 @@ def test_constructor_with_datetimes(self):
expected = Series({datetime64name: 1})
result.sort_index()
expected.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# GH 2810
ind = date_range(start="2000-01-01", freq="D", periods=10)
@@ -1277,7 +1272,7 @@ def test_constructor_with_datetimes(self):
expected = Series({datetime64name: 1, objectname: 1})
result.sort_index()
expected.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# GH 7594
# don't coerce tz-aware
@@ -1287,12 +1282,12 @@ def test_constructor_with_datetimes(self):
df = DataFrame({'End Date': dt}, index=[0])
self.assertEqual(df.iat[0, 0], dt)
- assert_series_equal(df.dtypes, Series(
+ tm.assert_series_equal(df.dtypes, Series(
{'End Date': 'datetime64[ns, US/Eastern]'}))
df = DataFrame([{'End Date': dt}])
self.assertEqual(df.iat[0, 0], dt)
- assert_series_equal(df.dtypes, Series(
+ tm.assert_series_equal(df.dtypes, Series(
{'End Date': 'datetime64[ns, US/Eastern]'}))
# tz-aware (UTC and other tz's)
@@ -1315,17 +1310,17 @@ def test_constructor_with_datetimes(self):
{'a': i.to_series(keep_tz=True).reset_index(drop=True)})
df = DataFrame()
df['a'] = i
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
df = DataFrame({'a': i})
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
# multiples
i_no_tz = date_range('1/1/2011', periods=5, freq='10s')
df = DataFrame({'a': i, 'b': i_no_tz})
expected = DataFrame({'a': i.to_series(keep_tz=True)
.reset_index(drop=True), 'b': i_no_tz})
- assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected)
def test_constructor_for_list_with_dtypes(self):
# TODO(wesm): unused
@@ -1348,39 +1343,39 @@ def test_constructor_for_list_with_dtypes(self):
df = DataFrame({'a': [2 ** 31, 2 ** 31 + 1]})
result = df.get_dtype_counts()
expected = Series({'int64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# GH #2751 (construction with no index specified), make sure we cast to
# platform values
df = DataFrame([1, 2])
result = df.get_dtype_counts()
expected = Series({'int64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame([1., 2.])
result = df.get_dtype_counts()
expected = Series({'float64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame({'a': [1, 2]})
result = df.get_dtype_counts()
expected = Series({'int64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame({'a': [1., 2.]})
result = df.get_dtype_counts()
expected = Series({'float64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame({'a': 1}, index=lrange(3))
result = df.get_dtype_counts()
expected = Series({'int64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
df = DataFrame({'a': 1.}, index=lrange(3))
result = df.get_dtype_counts()
expected = Series({'float64': 1})
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# with object list
df = DataFrame({'a': [1, 2, 4, 7], 'b': [1.2, 2.3, 5.1, 6.3],
@@ -1392,7 +1387,7 @@ def test_constructor_for_list_with_dtypes(self):
{'int64': 1, 'float64': 2, datetime64name: 1, objectname: 1})
result.sort_index()
expected.sort_index()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
@@ -1430,7 +1425,8 @@ def check(df):
indexer = np.arange(len(df.columns))[isnull(df.columns)]
if len(indexer) == 1:
- assert_series_equal(df.iloc[:, indexer[0]], df.loc[:, np.nan])
+ tm.assert_series_equal(df.iloc[:, indexer[0]],
+ df.loc[:, np.nan])
# multiple nans should fail
else:
@@ -1467,17 +1463,17 @@ def test_from_records_to_records(self):
# TODO(wesm): unused
frame = DataFrame.from_records(arr) # noqa
- index = np.arange(len(arr))[::-1]
+ index = pd.Index(np.arange(len(arr))[::-1])
indexed_frame = DataFrame.from_records(arr, index=index)
- self.assert_numpy_array_equal(indexed_frame.index, index)
+ self.assert_index_equal(indexed_frame.index, index)
# without names, it should go to last ditch
arr2 = np.zeros((2, 3))
- assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2))
+ tm.assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2))
# wrong length
msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)'
- with assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(ValueError, msg):
DataFrame.from_records(arr, index=index[:-1])
indexed_frame = DataFrame.from_records(arr, index='f1')
@@ -1508,14 +1504,14 @@ def test_from_records_iterator(self):
'u': np.array([1.0, 3.0], dtype=np.float32),
'y': np.array([2, 4], dtype=np.int64),
'z': np.array([2, 4], dtype=np.int32)})
- assert_frame_equal(df.reindex_like(xp), xp)
+ tm.assert_frame_equal(df.reindex_like(xp), xp)
# no dtypes specified here, so just compare with the default
arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)]
df = DataFrame.from_records(iter(arr), columns=['x', 'y'],
nrows=2)
- assert_frame_equal(df, xp.reindex(
- columns=['x', 'y']), check_dtype=False)
+ tm.assert_frame_equal(df, xp.reindex(columns=['x', 'y']),
+ check_dtype=False)
def test_from_records_tuples_generator(self):
def tuple_generator(length):
@@ -1532,7 +1528,7 @@ def tuple_generator(length):
generator = tuple_generator(10)
result = DataFrame.from_records(generator, columns=columns_names)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_lists_generator(self):
def list_generator(length):
@@ -1549,7 +1545,7 @@ def list_generator(length):
generator = list_generator(10)
result = DataFrame.from_records(generator, columns=columns_names)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_columns_not_modified(self):
tuples = [(1, 2, 3),
@@ -1582,7 +1578,7 @@ def test_from_records_duplicates(self):
expected = DataFrame([(1, 2, 3), (4, 5, 6)],
columns=['a', 'b', 'a'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_set_index_name(self):
def create_dict(order_id):
@@ -1607,7 +1603,7 @@ def test_from_records_misc_brokenness(self):
result = DataFrame.from_records(data, columns=['a', 'b'])
exp = DataFrame(data, columns=['a', 'b'])
- assert_frame_equal(result, exp)
+ tm.assert_frame_equal(result, exp)
# overlap in index/index_names
@@ -1615,7 +1611,7 @@ def test_from_records_misc_brokenness(self):
result = DataFrame.from_records(data, index=['a', 'b', 'c'])
exp = DataFrame(data, index=['a', 'b', 'c'])
- assert_frame_equal(result, exp)
+ tm.assert_frame_equal(result, exp)
# GH 2623
rows = []
@@ -1631,28 +1627,28 @@ def test_from_records_misc_brokenness(self):
df2_obj = DataFrame.from_records(rows, columns=['date', 'test'])
results = df2_obj.get_dtype_counts()
expected = Series({'datetime64[ns]': 1, 'int64': 1})
- assert_series_equal(results, expected)
+ tm.assert_series_equal(results, expected)
def test_from_records_empty(self):
# 3562
result = DataFrame.from_records([], columns=['a', 'b', 'c'])
expected = DataFrame(columns=['a', 'b', 'c'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = DataFrame.from_records([], columns=['a', 'b', 'b'])
expected = DataFrame(columns=['a', 'b', 'b'])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_empty_with_nonempty_fields_gh3682(self):
a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)])
df = DataFrame.from_records(a, index='id')
- assert_numpy_array_equal(df.index, Index([1], name='id'))
+ tm.assert_index_equal(df.index, Index([1], name='id'))
self.assertEqual(df.index.name, 'id')
- assert_numpy_array_equal(df.columns, Index(['value']))
+ tm.assert_index_equal(df.columns, Index(['value']))
b = np.array([], dtype=[('id', np.int64), ('value', np.int64)])
df = DataFrame.from_records(b, index='id')
- assert_numpy_array_equal(df.index, Index([], name='id'))
+ tm.assert_index_equal(df.index, Index([], name='id'))
self.assertEqual(df.index.name, 'id')
def test_from_records_with_datetimes(self):
@@ -1675,14 +1671,14 @@ def test_from_records_with_datetimes(self):
raise nose.SkipTest("known failure of numpy rec array creation")
result = DataFrame.from_records(recarray)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
# coercion should work too
arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])]
dtypes = [('EXPIRY', '<M8[m]')]
recarray = np.core.records.fromarrays(arrdata, dtype=dtypes)
result = DataFrame.from_records(recarray)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_sequencelike(self):
df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64),
@@ -1728,14 +1724,14 @@ def test_from_records_sequencelike(self):
result4 = (DataFrame.from_records(lists, columns=columns)
.reindex(columns=df.columns))
- assert_frame_equal(result, df, check_dtype=False)
- assert_frame_equal(result2, df)
- assert_frame_equal(result3, df)
- assert_frame_equal(result4, df, check_dtype=False)
+ tm.assert_frame_equal(result, df, check_dtype=False)
+ tm.assert_frame_equal(result2, df)
+ tm.assert_frame_equal(result3, df)
+ tm.assert_frame_equal(result4, df, check_dtype=False)
# tuples is in the order of the columns
result = DataFrame.from_records(tuples)
- self.assert_numpy_array_equal(result.columns, lrange(8))
+ tm.assert_index_equal(result.columns, pd.Index(lrange(8)))
# test exclude parameter & we are casting the results here (as we don't
# have dtype info to recover)
@@ -1744,13 +1740,14 @@ def test_from_records_sequencelike(self):
exclude = list(set(range(8)) - set(columns_to_test))
result = DataFrame.from_records(tuples, exclude=exclude)
result.columns = [columns[i] for i in sorted(columns_to_test)]
- assert_series_equal(result['C'], df['C'])
- assert_series_equal(result['E1'], df['E1'].astype('float64'))
+ tm.assert_series_equal(result['C'], df['C'])
+ tm.assert_series_equal(result['E1'], df['E1'].astype('float64'))
# empty case
result = DataFrame.from_records([], columns=['foo', 'bar', 'baz'])
self.assertEqual(len(result), 0)
- self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz'])
+ self.assert_index_equal(result.columns,
+ pd.Index(['foo', 'bar', 'baz']))
result = DataFrame.from_records([])
self.assertEqual(len(result), 0)
@@ -1787,24 +1784,24 @@ def test_from_records_dictlike(self):
.reindex(columns=df.columns))
for r in results:
- assert_frame_equal(r, df)
+ tm.assert_frame_equal(r, df)
def test_from_records_with_index_data(self):
df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
data = np.random.randn(10)
df1 = DataFrame.from_records(df, index=data)
- assert(df1.index.equals(Index(data)))
+ tm.assert_index_equal(df1.index, Index(data))
def test_from_records_bad_index_column(self):
df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
# should pass
df1 = DataFrame.from_records(df, index=['C'])
- assert(df1.index.equals(Index(df.C)))
+ tm.assert_index_equal(df1.index, Index(df.C))
df1 = DataFrame.from_records(df, index='C')
- assert(df1.index.equals(Index(df.C)))
+ tm.assert_index_equal(df1.index, Index(df.C))
# should fail
self.assertRaises(ValueError, DataFrame.from_records, df, index=[2])
@@ -1827,7 +1824,7 @@ def __iter__(self):
result = DataFrame.from_records(recs)
expected = DataFrame.from_records(tups)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_from_records_len0_with_columns(self):
# #2633
@@ -1856,20 +1853,20 @@ def test_from_dict(self):
df = DataFrame({'A': idx, 'B': dr})
self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern')
self.assertTrue(df['A'].name == 'A')
- assert_series_equal(df['A'], Series(idx, name='A'))
- assert_series_equal(df['B'], Series(dr, name='B'))
+ tm.assert_series_equal(df['A'], Series(idx, name='A'))
+ tm.assert_series_equal(df['B'], Series(dr, name='B'))
def test_from_index(self):
# from index
idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
df2 = DataFrame(idx2)
- assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+ tm.assert_series_equal(df2['foo'], Series(idx2, name='foo'))
df2 = DataFrame(Series(idx2))
- assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+ tm.assert_series_equal(df2['foo'], Series(idx2, name='foo'))
idx2 = date_range('20130101', periods=3, tz='US/Eastern')
df2 = DataFrame(idx2)
- assert_series_equal(df2[0], Series(idx2, name=0))
+ tm.assert_series_equal(df2[0], Series(idx2, name=0))
df2 = DataFrame(Series(idx2))
- assert_series_equal(df2[0], Series(idx2, name=0))
+ tm.assert_series_equal(df2[0], Series(idx2, name=0))
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index cf35372319c85..53083a602e183 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -42,19 +42,18 @@ def test_to_dict(self):
self.assertEqual(v2, recons_data[k][k2])
recons_data = DataFrame(test_data).to_dict("sp")
-
expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'],
'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]}
-
- tm.assert_almost_equal(recons_data, expected_split)
+ tm.assert_dict_equal(recons_data, expected_split)
recons_data = DataFrame(test_data).to_dict("r")
-
expected_records = [{'A': 1.0, 'B': '1'},
{'A': 2.0, 'B': '2'},
{'A': nan, 'B': '3'}]
-
- tm.assert_almost_equal(recons_data, expected_records)
+ tm.assertIsInstance(recons_data, list)
+ self.assertEqual(len(recons_data), 3)
+ for l, r in zip(recons_data, expected_records):
+ tm.assert_dict_equal(l, r)
# GH10844
recons_data = DataFrame(test_data).to_dict("i")
@@ -78,24 +77,24 @@ def test_to_dict_timestamp(self):
expected_records_mixed = [{'A': tsmp, 'B': 1},
{'A': tsmp, 'B': 2}]
- tm.assert_almost_equal(test_data.to_dict(
- orient='records'), expected_records)
- tm.assert_almost_equal(test_data_mixed.to_dict(
- orient='records'), expected_records_mixed)
+ self.assertEqual(test_data.to_dict(orient='records'),
+ expected_records)
+ self.assertEqual(test_data_mixed.to_dict(orient='records'),
+ expected_records_mixed)
expected_series = {
- 'A': Series([tsmp, tsmp]),
- 'B': Series([tsmp, tsmp]),
+ 'A': Series([tsmp, tsmp], name='A'),
+ 'B': Series([tsmp, tsmp], name='B'),
}
expected_series_mixed = {
- 'A': Series([tsmp, tsmp]),
- 'B': Series([1, 2]),
+ 'A': Series([tsmp, tsmp], name='A'),
+ 'B': Series([1, 2], name='B'),
}
- tm.assert_almost_equal(test_data.to_dict(
- orient='series'), expected_series)
- tm.assert_almost_equal(test_data_mixed.to_dict(
- orient='series'), expected_series_mixed)
+ tm.assert_dict_equal(test_data.to_dict(orient='series'),
+ expected_series)
+ tm.assert_dict_equal(test_data_mixed.to_dict(orient='series'),
+ expected_series_mixed)
expected_split = {
'index': [0, 1],
@@ -110,10 +109,10 @@ def test_to_dict_timestamp(self):
'columns': ['A', 'B']
}
- tm.assert_almost_equal(test_data.to_dict(
- orient='split'), expected_split)
- tm.assert_almost_equal(test_data_mixed.to_dict(
- orient='split'), expected_split_mixed)
+ tm.assert_dict_equal(test_data.to_dict(orient='split'),
+ expected_split)
+ tm.assert_dict_equal(test_data_mixed.to_dict(orient='split'),
+ expected_split_mixed)
def test_to_dict_invalid_orient(self):
df = DataFrame({'A': [0, 1]})
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 064230bde791a..5f95ff6b6b601 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -496,12 +496,13 @@ def test_astype(self):
def test_astype_str(self):
# str formatting
result = self.tzframe.astype(str)
- expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00',
- '2013-01-01 00:00:00+01:00'],
- ['2013-01-02', 'NaT', 'NaT'],
- ['2013-01-03', '2013-01-03 00:00:00-05:00',
- '2013-01-03 00:00:00+01:00']], dtype=object)
- self.assert_numpy_array_equal(result, expected)
+ expected = DataFrame([['2013-01-01', '2013-01-01 00:00:00-05:00',
+ '2013-01-01 00:00:00+01:00'],
+ ['2013-01-02', 'NaT', 'NaT'],
+ ['2013-01-03', '2013-01-03 00:00:00-05:00',
+ '2013-01-03 00:00:00+01:00']],
+ columns=self.tzframe.columns)
+ self.assert_frame_equal(result, expected)
result = str(self.tzframe)
self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 '
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index fc8456cb59840..78354f32acbda 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -216,7 +216,7 @@ def test_getitem_boolean(self):
subindex = self.tsframe.index[indexer]
subframe = self.tsframe[indexer]
- self.assert_numpy_array_equal(subindex, subframe.index)
+ self.assert_index_equal(subindex, subframe.index)
with assertRaisesRegexp(ValueError, 'Item wrong length'):
self.tsframe[indexer[:-1]]
@@ -1200,7 +1200,7 @@ def test_getitem_fancy_scalar(self):
for col in f.columns:
ts = f[col]
for idx in f.index[::5]:
- assert_almost_equal(ix[idx, col], ts[idx])
+ self.assertEqual(ix[idx, col], ts[idx])
def test_setitem_fancy_scalar(self):
f = self.frame
@@ -1392,7 +1392,7 @@ def test_setitem_single_column_mixed(self):
columns=['foo', 'bar', 'baz'])
df['str'] = 'qux'
df.ix[::2, 'str'] = nan
- expected = [nan, 'qux', nan, 'qux', nan]
+ expected = np.array([nan, 'qux', nan, 'qux', nan], dtype=object)
assert_almost_equal(df['str'].values, expected)
def test_setitem_single_column_mixed_datetime(self):
@@ -1546,21 +1546,21 @@ def test_get_value(self):
for col in self.frame.columns:
result = self.frame.get_value(idx, col)
expected = self.frame[col][idx]
- assert_almost_equal(result, expected)
+ self.assertEqual(result, expected)
def test_lookup(self):
- def alt(df, rows, cols):
+ def alt(df, rows, cols, dtype):
result = []
for r, c in zip(rows, cols):
result.append(df.get_value(r, c))
- return result
+ return np.array(result, dtype=dtype)
def testit(df):
rows = list(df.index) * len(df.columns)
cols = list(df.columns) * len(df.index)
result = df.lookup(rows, cols)
- expected = alt(df, rows, cols)
- assert_almost_equal(result, expected)
+ expected = alt(df, rows, cols, dtype=np.object_)
+ tm.assert_almost_equal(result, expected, check_dtype=False)
testit(self.mixed_frame)
testit(self.frame)
@@ -1570,7 +1570,7 @@ def testit(df):
'mask_b': [True, False, False, False],
'mask_c': [False, True, False, True]})
df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
- exp_mask = alt(df, df.index, 'mask_' + df['label'])
+ exp_mask = alt(df, df.index, 'mask_' + df['label'], dtype=np.bool_)
tm.assert_series_equal(df['mask'], pd.Series(exp_mask, name='mask'))
self.assertEqual(df['mask'].dtype, np.bool_)
@@ -1587,7 +1587,7 @@ def test_set_value(self):
for idx in self.frame.index:
for col in self.frame.columns:
self.frame.set_value(idx, col, 1)
- assert_almost_equal(self.frame[col][idx], 1)
+ self.assertEqual(self.frame[col][idx], 1)
def test_set_value_resize(self):
@@ -1777,7 +1777,7 @@ def test_iget_value(self):
for j, col in enumerate(self.frame.columns):
result = self.frame.iat[i, j]
expected = self.frame.at[row, col]
- assert_almost_equal(result, expected)
+ self.assertEqual(result, expected)
def test_nested_exception(self):
# Ignore the strange way of triggering the problem
diff --git a/pandas/tests/frame/test_misc_api.py b/pandas/tests/frame/test_misc_api.py
index 48b8d641a0f98..03b3c0a5e65d0 100644
--- a/pandas/tests/frame/test_misc_api.py
+++ b/pandas/tests/frame/test_misc_api.py
@@ -58,7 +58,7 @@ def test_get_value(self):
for col in self.frame.columns:
result = self.frame.get_value(idx, col)
expected = self.frame[col][idx]
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
def test_join_index(self):
# left / right
@@ -67,15 +67,15 @@ def test_join_index(self):
f2 = self.frame.reindex(columns=['C', 'D'])
joined = f.join(f2)
- self.assertTrue(f.index.equals(joined.index))
+ self.assert_index_equal(f.index, joined.index)
self.assertEqual(len(joined.columns), 4)
joined = f.join(f2, how='left')
- self.assertTrue(joined.index.equals(f.index))
+ self.assert_index_equal(joined.index, f.index)
self.assertEqual(len(joined.columns), 4)
joined = f.join(f2, how='right')
- self.assertTrue(joined.index.equals(f2.index))
+ self.assert_index_equal(joined.index, f2.index)
self.assertEqual(len(joined.columns), 4)
# inner
@@ -84,7 +84,7 @@ def test_join_index(self):
f2 = self.frame.reindex(columns=['C', 'D'])
joined = f.join(f2, how='inner')
- self.assertTrue(joined.index.equals(f.index.intersection(f2.index)))
+ self.assert_index_equal(joined.index, f.index.intersection(f2.index))
self.assertEqual(len(joined.columns), 4)
# outer
@@ -148,12 +148,12 @@ def test_join_overlap(self):
def test_add_prefix_suffix(self):
with_prefix = self.frame.add_prefix('foo#')
- expected = ['foo#%s' % c for c in self.frame.columns]
- self.assert_numpy_array_equal(with_prefix.columns, expected)
+ expected = pd.Index(['foo#%s' % c for c in self.frame.columns])
+ self.assert_index_equal(with_prefix.columns, expected)
with_suffix = self.frame.add_suffix('#foo')
- expected = ['%s#foo' % c for c in self.frame.columns]
- self.assert_numpy_array_equal(with_suffix.columns, expected)
+ expected = pd.Index(['%s#foo' % c for c in self.frame.columns])
+ self.assert_index_equal(with_suffix.columns, expected)
class TestDataFrameMisc(tm.TestCase, SharedWithSparse, TestData):
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 681ff8cf95dc9..8a6cbe44465c1 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -66,15 +66,17 @@ def test_dropIncompleteRows(self):
smaller_frame = frame.dropna()
assert_series_equal(frame['foo'], original)
inp_frame1.dropna(inplace=True)
- self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:])
- self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:])
+
+ exp = Series(mat[5:], index=self.frame.index[5:], name='foo')
+ tm.assert_series_equal(smaller_frame['foo'], exp)
+ tm.assert_series_equal(inp_frame1['foo'], exp)
samesize_frame = frame.dropna(subset=['bar'])
assert_series_equal(frame['foo'], original)
self.assertTrue((frame['bar'] == 5).all())
inp_frame2.dropna(subset=['bar'], inplace=True)
- self.assertTrue(samesize_frame.index.equals(self.frame.index))
- self.assertTrue(inp_frame2.index.equals(self.frame.index))
+ self.assert_index_equal(samesize_frame.index, self.frame.index)
+ self.assert_index_equal(inp_frame2.index, self.frame.index)
def test_dropna(self):
df = DataFrame(np.random.randn(6, 4))
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index 0d58dd5402aff..2bdd6657eaf18 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -5,7 +5,7 @@
from pandas.compat import range, lrange
import numpy as np
-from pandas import DataFrame, Series
+from pandas import DataFrame, Series, Index
from pandas.util.testing import (assert_series_equal,
assert_frame_equal,
@@ -123,12 +123,12 @@ def test_insert(self):
columns=['c', 'b', 'a'])
df.insert(0, 'foo', df['a'])
- self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'b', 'a'])
+ self.assert_index_equal(df.columns, Index(['foo', 'c', 'b', 'a']))
tm.assert_series_equal(df['a'], df['foo'], check_names=False)
df.insert(2, 'bar', df['c'])
- self.assert_numpy_array_equal(df.columns,
- ['foo', 'c', 'bar', 'b', 'a'])
+ self.assert_index_equal(df.columns,
+ Index(['foo', 'c', 'bar', 'b', 'a']))
tm.assert_almost_equal(df['c'], df['bar'], check_names=False)
# diff dtype
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 7dfada0d868fe..ee7c296f563f0 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -741,7 +741,7 @@ def test_combineFrame(self):
self.assertTrue(np.isnan(added['D']).all())
self_added = self.frame + self.frame
- self.assertTrue(self_added.index.equals(self.frame.index))
+ self.assert_index_equal(self_added.index, self.frame.index)
added_rev = frame_copy + self.frame
self.assertTrue(np.isnan(added['D']).all())
@@ -838,7 +838,7 @@ def test_combineSeries(self):
smaller_frame = self.tsframe[:-5]
smaller_added = smaller_frame.add(ts, axis='index')
- self.assertTrue(smaller_added.index.equals(self.tsframe.index))
+ self.assert_index_equal(smaller_added.index, self.tsframe.index)
smaller_ts = ts[:-5]
smaller_added2 = self.tsframe.add(smaller_ts, axis='index')
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 43c288162b134..066485e966a42 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -79,7 +79,7 @@ def test_pivot_integer_bug(self):
result = df.pivot(index=1, columns=0, values=2)
repr(result)
- self.assert_numpy_array_equal(result.columns, ['A', 'B'])
+ self.assert_index_equal(result.columns, Index(['A', 'B'], name=0))
def test_pivot_index_none(self):
# gh-3962
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 9a16714e18be3..bacf604c491b1 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -626,7 +626,7 @@ def _make_frame(names=None):
exp = tsframe[:0]
exp.index = []
- self.assertTrue(recons.columns.equals(exp.columns))
+ self.assert_index_equal(recons.columns, exp.columns)
self.assertEqual(len(recons), 0)
def test_to_csv_float32_nanrep(self):
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 0002bd840def3..e342eee2aabbb 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -607,7 +607,7 @@ def test_equals_op(self):
# assuming the 2nd to last item is unique in the data
item = index_a[-2]
tm.assert_numpy_array_equal(index_a == item, expected3)
- tm.assert_numpy_array_equal(series_a == item, expected3)
+ tm.assert_series_equal(series_a == item, Series(expected3))
def test_numpy_ufuncs(self):
# test ufuncs of numpy 1.9.2. see:
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 1591df5f1af2a..aa007c039f8ee 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -74,14 +74,14 @@ def test_constructor(self):
arr = np.array(self.strIndex)
index = Index(arr)
tm.assert_contains_all(arr, index)
- tm.assert_numpy_array_equal(self.strIndex, index)
+ tm.assert_index_equal(self.strIndex, index)
# copy
arr = np.array(self.strIndex)
index = Index(arr, copy=True, name='name')
tm.assertIsInstance(index, Index)
self.assertEqual(index.name, 'name')
- tm.assert_numpy_array_equal(arr, index)
+ tm.assert_numpy_array_equal(arr, index.values)
arr[0] = "SOMEBIGLONGSTRING"
self.assertNotEqual(index[0], "SOMEBIGLONGSTRING")
@@ -155,30 +155,28 @@ def test_constructor_from_series(self):
s = Series([Timestamp('20110101'), Timestamp('20120101'), Timestamp(
'20130101')])
result = Index(s)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = DatetimeIndex(s)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
# GH 6273
# create from a series, passing a freq
s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990',
'4-1-1990', '5-1-1990']))
result = DatetimeIndex(s, freq='MS')
- expected = DatetimeIndex(
- ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'
- ], freq='MS')
- self.assertTrue(result.equals(expected))
+ expected = DatetimeIndex(['1-1-1990', '2-1-1990', '3-1-1990',
+ '4-1-1990', '5-1-1990'], freq='MS')
+ self.assert_index_equal(result, expected)
df = pd.DataFrame(np.random.rand(5, 3))
df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990',
'5-1-1990']
result = DatetimeIndex(df['date'], freq='MS')
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(df['date'].dtype, object)
- exp = pd.Series(
- ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'
- ], name='date')
+ exp = pd.Series(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990',
+ '5-1-1990'], name='date')
self.assert_series_equal(df['date'], exp)
# GH 6274
@@ -202,26 +200,26 @@ def __array__(self, dtype=None):
date_range('2000-01-01', periods=3).values]:
expected = pd.Index(array)
result = pd.Index(ArrayLike(array))
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
def test_index_ctor_infer_periodindex(self):
xp = period_range('2012-1-1', freq='M', periods=3)
rs = Index(xp)
- tm.assert_numpy_array_equal(rs, xp)
+ tm.assert_index_equal(rs, xp)
tm.assertIsInstance(rs, PeriodIndex)
def test_constructor_simple_new(self):
idx = Index([1, 2, 3, 4, 5], name='int')
result = idx._simple_new(idx, 'int')
- self.assertTrue(result.equals(idx))
+ self.assert_index_equal(result, idx)
idx = Index([1.1, np.nan, 2.2, 3.0], name='float')
result = idx._simple_new(idx, 'float')
- self.assertTrue(result.equals(idx))
+ self.assert_index_equal(result, idx)
idx = Index(['A', 'B', 'C', np.nan], name='obj')
result = idx._simple_new(idx, 'obj')
- self.assertTrue(result.equals(idx))
+ self.assert_index_equal(result, idx)
def test_constructor_dtypes(self):
@@ -338,31 +336,31 @@ def test_insert(self):
result = Index(['b', 'c', 'd'])
# test 0th element
- self.assertTrue(Index(['a', 'b', 'c', 'd']).equals(result.insert(0,
- 'a')))
+ self.assert_index_equal(Index(['a', 'b', 'c', 'd']),
+ result.insert(0, 'a'))
# test Nth element that follows Python list behavior
- self.assertTrue(Index(['b', 'c', 'e', 'd']).equals(result.insert(-1,
- 'e')))
+ self.assert_index_equal(Index(['b', 'c', 'e', 'd']),
+ result.insert(-1, 'e'))
# test loc +/- neq (0, -1)
- self.assertTrue(result.insert(1, 'z').equals(result.insert(-2, 'z')))
+ self.assert_index_equal(result.insert(1, 'z'), result.insert(-2, 'z'))
# test empty
null_index = Index([])
- self.assertTrue(Index(['a']).equals(null_index.insert(0, 'a')))
+ self.assert_index_equal(Index(['a']), null_index.insert(0, 'a'))
def test_delete(self):
idx = Index(['a', 'b', 'c', 'd'], name='idx')
expected = Index(['b', 'c', 'd'], name='idx')
result = idx.delete(0)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
expected = Index(['a', 'b', 'c'], name='idx')
result = idx.delete(-1)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
with tm.assertRaises((IndexError, ValueError)):
@@ -525,14 +523,14 @@ def test_intersection(self):
idx2 = Index([3, 4, 5, 6, 7], name='idx')
expected2 = Index([3, 4, 5], name='idx')
result2 = idx1.intersection(idx2)
- self.assertTrue(result2.equals(expected2))
+ self.assert_index_equal(result2, expected2)
self.assertEqual(result2.name, expected2.name)
# if target name is different, it will be reset
idx3 = Index([3, 4, 5, 6, 7], name='other')
expected3 = Index([3, 4, 5], name=None)
result3 = idx1.intersection(idx3)
- self.assertTrue(result3.equals(expected3))
+ self.assert_index_equal(result3, expected3)
self.assertEqual(result3.name, expected3.name)
# non monotonic
@@ -552,7 +550,7 @@ def test_intersection(self):
idx2 = Index(['B', 'D'])
expected = Index(['B'], dtype='object')
result = idx1.intersection(idx2)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
# preserve names
first = self.strIndex[5:20]
@@ -677,11 +675,11 @@ def test_append_multiple(self):
foos = [index[:2], index[2:4], index[4:]]
result = foos[0].append(foos[1:])
- self.assertTrue(result.equals(index))
+ self.assert_index_equal(result, index)
# empty
result = index.append([])
- self.assertTrue(result.equals(index))
+ self.assert_index_equal(result, index)
def test_append_empty_preserve_name(self):
left = Index([], name='foo')
@@ -883,10 +881,10 @@ def test_get_indexer(self):
idx2 = Index([2, 4, 6])
r1 = idx1.get_indexer(idx2)
- assert_almost_equal(r1, [1, 3, -1])
+ assert_almost_equal(r1, np.array([1, 3, -1]))
r1 = idx2.get_indexer(idx1, method='pad')
- e1 = [-1, 0, 0, 1, 1]
+ e1 = np.array([-1, 0, 0, 1, 1])
assert_almost_equal(r1, e1)
r2 = idx2.get_indexer(idx1[::-1], method='pad')
@@ -896,7 +894,7 @@ def test_get_indexer(self):
assert_almost_equal(r1, rffill1)
r1 = idx2.get_indexer(idx1, method='backfill')
- e1 = [0, 0, 1, 1, 2]
+ e1 = np.array([0, 0, 1, 1, 2])
assert_almost_equal(r1, e1)
rbfill1 = idx2.get_indexer(idx1, method='bfill')
@@ -921,25 +919,25 @@ def test_get_indexer_nearest(self):
all_methods = ['pad', 'backfill', 'nearest']
for method in all_methods:
actual = idx.get_indexer([0, 5, 9], method=method)
- tm.assert_numpy_array_equal(actual, [0, 5, 9])
+ tm.assert_numpy_array_equal(actual, np.array([0, 5, 9]))
actual = idx.get_indexer([0, 5, 9], method=method, tolerance=0)
- tm.assert_numpy_array_equal(actual, [0, 5, 9])
+ tm.assert_numpy_array_equal(actual, np.array([0, 5, 9]))
- for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2,
- 9]]):
+ for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9],
+ [0, 2, 9]]):
actual = idx.get_indexer([0.2, 1.8, 8.5], method=method)
- tm.assert_numpy_array_equal(actual, expected)
+ tm.assert_numpy_array_equal(actual, np.array(expected))
actual = idx.get_indexer([0.2, 1.8, 8.5], method=method,
tolerance=1)
- tm.assert_numpy_array_equal(actual, expected)
+ tm.assert_numpy_array_equal(actual, np.array(expected))
for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1],
[0, 2, -1]]):
actual = idx.get_indexer([0.2, 1.8, 8.5], method=method,
tolerance=0.2)
- tm.assert_numpy_array_equal(actual, expected)
+ tm.assert_numpy_array_equal(actual, np.array(expected))
with tm.assertRaisesRegexp(ValueError, 'limit argument'):
idx.get_indexer([1, 0], method='nearest', limit=1)
@@ -950,22 +948,22 @@ def test_get_indexer_nearest_decreasing(self):
all_methods = ['pad', 'backfill', 'nearest']
for method in all_methods:
actual = idx.get_indexer([0, 5, 9], method=method)
- tm.assert_numpy_array_equal(actual, [9, 4, 0])
+ tm.assert_numpy_array_equal(actual, np.array([9, 4, 0]))
- for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7,
- 0]]):
+ for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1],
+ [9, 7, 0]]):
actual = idx.get_indexer([0.2, 1.8, 8.5], method=method)
- tm.assert_numpy_array_equal(actual, expected)
+ tm.assert_numpy_array_equal(actual, np.array(expected))
def test_get_indexer_strings(self):
idx = pd.Index(['b', 'c'])
actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='pad')
- expected = [-1, 0, 1, 1]
+ expected = np.array([-1, 0, 1, 1])
tm.assert_numpy_array_equal(actual, expected)
actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='backfill')
- expected = [0, 0, 1, -1]
+ expected = np.array([0, 0, 1, -1])
tm.assert_numpy_array_equal(actual, expected)
with tm.assertRaises(TypeError):
@@ -1086,7 +1084,7 @@ def check_slice(in_slice, expected):
in_slice.step)
result = idx[s_start:s_stop:in_slice.step]
expected = pd.Index(list(expected))
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
for in_slice, expected in [
(SLC[::-1], 'yxdcb'), (SLC['b':'y':-1], ''),
@@ -1108,7 +1106,7 @@ def test_drop(self):
drop = self.strIndex[lrange(5, 10)]
dropped = self.strIndex.drop(drop)
expected = self.strIndex[lrange(5) + lrange(10, n)]
- self.assertTrue(dropped.equals(expected))
+ self.assert_index_equal(dropped, expected)
self.assertRaises(ValueError, self.strIndex.drop, ['foo', 'bar'])
self.assertRaises(ValueError, self.strIndex.drop, ['1', 'bar'])
@@ -1161,13 +1159,13 @@ def test_tuple_union_bug(self):
# needs to be 1d like idx1 and idx2
expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2)))
self.assertEqual(int_idx.ndim, 1)
- self.assertTrue(int_idx.equals(expected))
+ self.assert_index_equal(int_idx, expected)
# union broken
union_idx = idx1.union(idx2)
expected = idx2
self.assertEqual(union_idx.ndim, 1)
- self.assertTrue(union_idx.equals(expected))
+ self.assert_index_equal(union_idx, expected)
def test_is_monotonic_incomparable(self):
index = Index([5, datetime.now(), 7])
@@ -1202,21 +1200,22 @@ def test_isin(self):
self.assertEqual(result.dtype, np.bool_)
def test_isin_nan(self):
- tm.assert_numpy_array_equal(
- Index(['a', np.nan]).isin([np.nan]), [False, True])
- tm.assert_numpy_array_equal(
- Index(['a', pd.NaT]).isin([pd.NaT]), [False, True])
- tm.assert_numpy_array_equal(
- Index(['a', np.nan]).isin([float('nan')]), [False, False])
- tm.assert_numpy_array_equal(
- Index(['a', np.nan]).isin([pd.NaT]), [False, False])
+ tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([np.nan]),
+ np.array([False, True]))
+ tm.assert_numpy_array_equal(Index(['a', pd.NaT]).isin([pd.NaT]),
+ np.array([False, True]))
+ tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([float('nan')]),
+ np.array([False, False]))
+ tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([pd.NaT]),
+ np.array([False, False]))
# Float64Index overrides isin, so must be checked separately
+ tm.assert_numpy_array_equal(Float64Index([1.0, np.nan]).isin([np.nan]),
+ np.array([False, True]))
tm.assert_numpy_array_equal(
- Float64Index([1.0, np.nan]).isin([np.nan]), [False, True])
- tm.assert_numpy_array_equal(
- Float64Index([1.0, np.nan]).isin([float('nan')]), [False, True])
- tm.assert_numpy_array_equal(
- Float64Index([1.0, np.nan]).isin([pd.NaT]), [False, True])
+ Float64Index([1.0, np.nan]).isin([float('nan')]),
+ np.array([False, True]))
+ tm.assert_numpy_array_equal(Float64Index([1.0, np.nan]).isin([pd.NaT]),
+ np.array([False, True]))
def test_isin_level_kwarg(self):
def check_idx(idx):
@@ -1255,7 +1254,7 @@ def test_boolean_cmp(self):
def test_get_level_values(self):
result = self.strIndex.get_level_values(0)
- self.assertTrue(result.equals(self.strIndex))
+ self.assert_index_equal(result, self.strIndex)
def test_slice_keep_name(self):
idx = Index(['a', 'b'], name='asdf')
@@ -1619,4 +1618,4 @@ def test_string_index_repr(self):
def test_get_combined_index():
from pandas.core.index import _get_combined_index
result = _get_combined_index([])
- assert (result.equals(Index([])))
+ tm.assert_index_equal(result, Index([]))
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 7fff62b822e40..c64b1e9fc4af8 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -48,46 +48,48 @@ def test_construction(self):
# empty
result = CategoricalIndex(categories=categories)
- self.assertTrue(result.categories.equals(Index(categories)))
+ self.assert_index_equal(result.categories, Index(categories))
tm.assert_numpy_array_equal(result.codes, np.array([], dtype='int8'))
self.assertFalse(result.ordered)
# passing categories
result = CategoricalIndex(list('aabbca'), categories=categories)
- self.assertTrue(result.categories.equals(Index(categories)))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, 2, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(categories))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, 2, 0], dtype='int8'))
c = pd.Categorical(list('aabbca'))
result = CategoricalIndex(c)
- self.assertTrue(result.categories.equals(Index(list('abc'))))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, 2, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(list('abc')))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, 2, 0], dtype='int8'))
self.assertFalse(result.ordered)
result = CategoricalIndex(c, categories=categories)
- self.assertTrue(result.categories.equals(Index(categories)))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, 2, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(categories))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, 2, 0], dtype='int8'))
self.assertFalse(result.ordered)
ci = CategoricalIndex(c, categories=list('abcd'))
result = CategoricalIndex(ci)
- self.assertTrue(result.categories.equals(Index(categories)))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, 2, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(categories))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, 2, 0], dtype='int8'))
self.assertFalse(result.ordered)
result = CategoricalIndex(ci, categories=list('ab'))
- self.assertTrue(result.categories.equals(Index(list('ab'))))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, -1, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(list('ab')))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, -1, 0],
+ dtype='int8'))
self.assertFalse(result.ordered)
result = CategoricalIndex(ci, categories=list('ab'), ordered=True)
- self.assertTrue(result.categories.equals(Index(list('ab'))))
- tm.assert_numpy_array_equal(result.codes, np.array(
- [0, 0, 1, 1, -1, 0], dtype='int8'))
+ self.assert_index_equal(result.categories, Index(list('ab')))
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([0, 0, 1, 1, -1, 0],
+ dtype='int8'))
self.assertTrue(result.ordered)
# turn me to an Index
@@ -323,7 +325,7 @@ def test_astype(self):
tm.assert_index_equal(result, ci, exact=True)
result = ci.astype(object)
- self.assertTrue(result.equals(Index(np.array(ci))))
+ self.assert_index_equal(result, Index(np.array(ci)))
# this IS equal, but not the same class
self.assertTrue(result.equals(ci))
@@ -352,7 +354,7 @@ def test_reindexing(self):
expected = oidx.get_indexer_non_unique(finder)[0]
actual = ci.get_indexer(finder)
- tm.assert_numpy_array_equal(expected, actual)
+ tm.assert_numpy_array_equal(expected.values, actual, check_dtype=False)
def test_reindex_dtype(self):
c = CategoricalIndex(['a', 'b', 'c', 'a'])
@@ -401,7 +403,7 @@ def test_get_indexer(self):
for indexer in [idx2, list('abf'), Index(list('abf'))]:
r1 = idx1.get_indexer(idx2)
- assert_almost_equal(r1, [0, 1, 2, -1])
+ assert_almost_equal(r1, np.array([0, 1, 2, -1]))
self.assertRaises(NotImplementedError,
lambda: idx2.get_indexer(idx1, method='pad'))
diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py
index b3b987ceb6ab6..4a664ed3542d7 100644
--- a/pandas/tests/indexes/test_datetimelike.py
+++ b/pandas/tests/indexes/test_datetimelike.py
@@ -353,7 +353,8 @@ def test_astype(self):
rng = date_range('1/1/2000', periods=10)
result = rng.astype('i8')
- self.assert_numpy_array_equal(result, rng.asi8)
+ self.assert_index_equal(result, Index(rng.asi8))
+ self.assert_numpy_array_equal(result.values, rng.asi8)
def test_astype_with_tz(self):
@@ -532,26 +533,29 @@ def test_get_loc(self):
# time indexing
idx = pd.date_range('2000-01-01', periods=24, freq='H')
- tm.assert_numpy_array_equal(idx.get_loc(time(12)), [12])
- tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)), [])
+ tm.assert_numpy_array_equal(idx.get_loc(time(12)),
+ np.array([12], dtype=np.int64))
+ tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)),
+ np.array([], dtype=np.int64))
with tm.assertRaises(NotImplementedError):
idx.get_loc(time(12, 30), method='pad')
def test_get_indexer(self):
idx = pd.date_range('2000-01-01', periods=3)
- tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
+ tm.assert_numpy_array_equal(idx.get_indexer(idx), np.array([0, 1, 2]))
target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours',
'1 day 1 hour'])
- tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'backfill'), [0, 1, 2])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest'), [0, 1, 1])
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'),
+ np.array([-1, 0, 1]))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'),
+ np.array([0, 1, 2]))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'),
+ np.array([0, 1, 1]))
tm.assert_numpy_array_equal(
idx.get_indexer(target, 'nearest',
tolerance=pd.Timedelta('1 hour')),
- [0, -1, 1])
+ np.array([0, -1, 1]))
with tm.assertRaises(ValueError):
idx.get_indexer(idx[[0]], method='nearest', tolerance='foo')
@@ -561,7 +565,7 @@ def test_roundtrip_pickle_with_tz(self):
# round-trip of timezone
index = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
unpickled = self.round_trip_pickle(index)
- self.assertTrue(index.equals(unpickled))
+ self.assert_index_equal(index, unpickled)
def test_reindex_preserves_tz_if_target_is_empty_list_or_array(self):
# GH7774
@@ -752,7 +756,8 @@ def test_astype(self):
idx = period_range('1990', '2009', freq='A')
result = idx.astype('i8')
- self.assert_numpy_array_equal(result, idx.values)
+ self.assert_index_equal(result, Index(idx.asi8))
+ self.assert_numpy_array_equal(result.values, idx.values)
def test_astype_raises(self):
# GH 13149, GH 13209
@@ -843,25 +848,28 @@ def test_where_other(self):
def test_get_indexer(self):
idx = pd.period_range('2000-01-01', periods=3).asfreq('H', how='start')
- tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
+ tm.assert_numpy_array_equal(idx.get_indexer(idx),
+ np.array([0, 1, 2], dtype=np.int_))
target = pd.PeriodIndex(['1999-12-31T23', '2000-01-01T12',
'2000-01-02T01'], freq='H')
- tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'backfill'), [0, 1, 2])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest'), [0, 1, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest', tolerance='1 hour'),
- [0, -1, 1])
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'),
+ np.array([-1, 0, 1], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'),
+ np.array([0, 1, 2], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'),
+ np.array([0, 1, 1], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest',
+ tolerance='1 hour'),
+ np.array([0, -1, 1], dtype=np.int_))
msg = 'Input has different freq from PeriodIndex\\(freq=H\\)'
with self.assertRaisesRegexp(ValueError, msg):
idx.get_indexer(target, 'nearest', tolerance='1 minute')
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest', tolerance='1 day'), [0, 1, 1])
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest',
+ tolerance='1 day'),
+ np.array([0, 1, 1], dtype=np.int_))
def test_repeat(self):
# GH10183
@@ -956,7 +964,8 @@ def test_astype(self):
rng = timedelta_range('1 days', periods=10)
result = rng.astype('i8')
- self.assert_numpy_array_equal(result, rng.asi8)
+ self.assert_index_equal(result, Index(rng.asi8))
+ self.assert_numpy_array_equal(rng.asi8, result.values)
def test_astype_timedelta64(self):
# GH 13149, GH 13209
@@ -1005,18 +1014,20 @@ def test_get_loc(self):
def test_get_indexer(self):
idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
- tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
+ tm.assert_numpy_array_equal(idx.get_indexer(idx),
+ np.array([0, 1, 2], dtype=np.int_))
target = pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour'])
- tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'backfill'), [0, 1, 2])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest'), [0, 1, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest',
- tolerance=pd.Timedelta('1 hour')),
- [0, -1, 1])
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'),
+ np.array([-1, 0, 1], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'),
+ np.array([0, 1, 2], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'),
+ np.array([0, 1, 1], dtype=np.int_))
+
+ res = idx.get_indexer(target, 'nearest',
+ tolerance=pd.Timedelta('1 hour'))
+ tm.assert_numpy_array_equal(res, np.array([0, -1, 1], dtype=np.int_))
def test_numeric_compat(self):
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 10d87abf0d886..bec52f5f47b09 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -644,7 +644,7 @@ def test_from_product(self):
('buz', 'c')]
expected = MultiIndex.from_tuples(tuples, names=names)
- tm.assert_numpy_array_equal(result, expected)
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.names, names)
def test_from_product_datetimeindex(self):
@@ -681,14 +681,14 @@ def test_append(self):
def test_get_level_values(self):
result = self.index.get_level_values(0)
- expected = ['foo', 'foo', 'bar', 'baz', 'qux', 'qux']
- tm.assert_numpy_array_equal(result, expected)
-
+ expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'],
+ name='first')
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, 'first')
result = self.index.get_level_values('first')
expected = self.index.get_level_values(0)
- tm.assert_numpy_array_equal(result, expected)
+ tm.assert_index_equal(result, expected)
# GH 10460
index = MultiIndex(levels=[CategoricalIndex(
@@ -703,19 +703,19 @@ def test_get_level_values_na(self):
arrays = [['a', 'b', 'b'], [1, np.nan, 2]]
index = pd.MultiIndex.from_arrays(arrays)
values = index.get_level_values(1)
- expected = [1, np.nan, 2]
+ expected = np.array([1, np.nan, 2])
tm.assert_numpy_array_equal(values.values.astype(float), expected)
arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]]
index = pd.MultiIndex.from_arrays(arrays)
values = index.get_level_values(1)
- expected = [np.nan, np.nan, 2]
+ expected = np.array([np.nan, np.nan, 2])
tm.assert_numpy_array_equal(values.values.astype(float), expected)
arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
index = pd.MultiIndex.from_arrays(arrays)
values = index.get_level_values(0)
- expected = [np.nan, np.nan, np.nan]
+ expected = np.array([np.nan, np.nan, np.nan])
tm.assert_numpy_array_equal(values.values.astype(float), expected)
values = index.get_level_values(1)
expected = np.array(['a', np.nan, 1], dtype=object)
@@ -1031,10 +1031,10 @@ def test_get_indexer(self):
idx2 = index[[1, 3, 5]]
r1 = idx1.get_indexer(idx2)
- assert_almost_equal(r1, [1, 3, -1])
+ assert_almost_equal(r1, np.array([1, 3, -1]))
r1 = idx2.get_indexer(idx1, method='pad')
- e1 = [-1, 0, 0, 1, 1]
+ e1 = np.array([-1, 0, 0, 1, 1])
assert_almost_equal(r1, e1)
r2 = idx2.get_indexer(idx1[::-1], method='pad')
@@ -1044,7 +1044,7 @@ def test_get_indexer(self):
assert_almost_equal(r1, rffill1)
r1 = idx2.get_indexer(idx1, method='backfill')
- e1 = [0, 0, 1, 1, 2]
+ e1 = np.array([0, 0, 1, 1, 2])
assert_almost_equal(r1, e1)
r2 = idx2.get_indexer(idx1[::-1], method='backfill')
@@ -1064,9 +1064,10 @@ def test_get_indexer(self):
# create index with duplicates
idx1 = Index(lrange(10) + lrange(10))
idx2 = Index(lrange(20))
- assertRaisesRegexp(InvalidIndexError, "Reindexing only valid with"
- " uniquely valued Index objects", idx1.get_indexer,
- idx2)
+
+ msg = "Reindexing only valid with uniquely valued Index objects"
+ with assertRaisesRegexp(InvalidIndexError, msg):
+ idx1.get_indexer(idx2)
def test_get_indexer_nearest(self):
midx = MultiIndex.from_tuples([('a', 1), ('b', 2)])
@@ -1524,15 +1525,18 @@ def test_insert(self):
# key not contained in all levels
new_index = self.index.insert(0, ('abc', 'three'))
- tm.assert_numpy_array_equal(new_index.levels[0],
- list(self.index.levels[0]) + ['abc'])
- tm.assert_numpy_array_equal(new_index.levels[1],
- list(self.index.levels[1]) + ['three'])
+
+ exp0 = Index(list(self.index.levels[0]) + ['abc'], name='first')
+ tm.assert_index_equal(new_index.levels[0], exp0)
+
+ exp1 = Index(list(self.index.levels[1]) + ['three'], name='second')
+ tm.assert_index_equal(new_index.levels[1], exp1)
self.assertEqual(new_index[0], ('abc', 'three'))
# key wrong length
- assertRaisesRegexp(ValueError, "Item must have length equal to number"
- " of levels", self.index.insert, 0, ('foo2', ))
+ msg = "Item must have length equal to number of levels"
+ with assertRaisesRegexp(ValueError, msg):
+ self.index.insert(0, ('foo2', ))
left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]],
columns=['1st', '2nd', '3rd'])
@@ -1553,14 +1557,9 @@ def test_insert(self):
ts.loc[('a', 'w')] = 5
ts.loc['a', 'a'] = 6
- right = pd.DataFrame([['a', 'b', 0],
- ['b', 'd', 1],
- ['b', 'x', 2],
- ['b', 'a', -1],
- ['b', 'b', 3],
- ['a', 'x', 4],
- ['a', 'w', 5],
- ['a', 'a', 6]],
+ right = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1], ['b', 'x', 2],
+ ['b', 'a', -1], ['b', 'b', 3], ['a', 'x', 4],
+ ['a', 'w', 5], ['a', 'a', 6]],
columns=['1st', '2nd', '3rd'])
right.set_index(['1st', '2nd'], inplace=True)
# FIXME data types changes to float because
@@ -2001,9 +2000,9 @@ def test_isin(self):
def test_isin_nan(self):
idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]])
tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]),
- [False, False])
+ np.array([False, False]))
tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]),
- [False, False])
+ np.array([False, False]))
def test_isin_level_kwarg(self):
idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange(
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 1247e4dc62997..5eac0bc870756 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -158,6 +158,7 @@ def check_is_index(self, i):
def check_coerce(self, a, b, is_float_index=True):
self.assertTrue(a.equals(b))
+ self.assert_index_equal(a, b, exact=False)
if is_float_index:
self.assertIsInstance(b, Float64Index)
else:
@@ -282,14 +283,16 @@ def test_equals(self):
def test_get_indexer(self):
idx = Float64Index([0.0, 1.0, 2.0])
- tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
+ tm.assert_numpy_array_equal(idx.get_indexer(idx),
+ np.array([0, 1, 2], dtype=np.int_))
target = [-0.1, 0.5, 1.1]
- tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'backfill'), [0, 1, 2])
- tm.assert_numpy_array_equal(
- idx.get_indexer(target, 'nearest'), [0, 1, 1])
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'),
+ np.array([-1, 0, 1], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'),
+ np.array([0, 1, 2], dtype=np.int_))
+ tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'),
+ np.array([0, 1, 1], dtype=np.int_))
def test_get_loc(self):
idx = Float64Index([0.0, 1.0, 2.0])
@@ -425,12 +428,12 @@ def testit():
def test_constructor(self):
# pass list, coerce fine
index = Int64Index([-5, 0, 1, 2])
- expected = np.array([-5, 0, 1, 2], dtype=np.int64)
- tm.assert_numpy_array_equal(index, expected)
+ expected = Index([-5, 0, 1, 2], dtype=np.int64)
+ tm.assert_index_equal(index, expected)
# from iterable
index = Int64Index(iter([-5, 0, 1, 2]))
- tm.assert_numpy_array_equal(index, expected)
+ tm.assert_index_equal(index, expected)
# scalar raise Exception
self.assertRaises(TypeError, Int64Index, 5)
@@ -438,7 +441,7 @@ def test_constructor(self):
# copy
arr = self.index.values
new_index = Int64Index(arr, copy=True)
- tm.assert_numpy_array_equal(new_index, self.index)
+ tm.assert_index_equal(new_index, self.index)
val = arr[0] + 3000
# this should not change index
@@ -457,7 +460,7 @@ def test_constructor_corner(self):
arr = np.array([1, 2, 3, 4], dtype=object)
index = Int64Index(arr)
self.assertEqual(index.values.dtype, np.int64)
- self.assertTrue(index.equals(arr))
+ self.assert_index_equal(index, Index(arr))
# preventing casting
arr = np.array([1, '2', 3, '4'], dtype=object)
@@ -581,7 +584,7 @@ def test_join_outer(self):
res, lidx, ridx = self.index.join(other, how='outer',
return_indexers=True)
noidx_res = self.index.join(other, how='outer')
- self.assertTrue(res.equals(noidx_res))
+ self.assert_index_equal(res, noidx_res)
eres = Int64Index([0, 1, 2, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 25])
elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1],
@@ -590,7 +593,7 @@ def test_join_outer(self):
dtype=np.int_)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -598,14 +601,14 @@ def test_join_outer(self):
res, lidx, ridx = self.index.join(other_mono, how='outer',
return_indexers=True)
noidx_res = self.index.join(other_mono, how='outer')
- self.assertTrue(res.equals(noidx_res))
+ self.assert_index_equal(res, noidx_res)
elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1],
dtype=np.int64)
eridx = np.array([-1, 0, 1, -1, 2, -1, 3, -1, -1, 4, -1, -1, -1, 5],
dtype=np.int64)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -628,7 +631,7 @@ def test_join_inner(self):
eridx = np.array([4, 1], dtype=np.int_)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -637,12 +640,12 @@ def test_join_inner(self):
return_indexers=True)
res2 = self.index.intersection(other_mono)
- self.assertTrue(res.equals(res2))
+ self.assert_index_equal(res, res2)
elidx = np.array([1, 6], dtype=np.int64)
eridx = np.array([1, 4], dtype=np.int64)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -658,7 +661,7 @@ def test_join_left(self):
dtype=np.int_)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assertIsNone(lidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -668,7 +671,7 @@ def test_join_left(self):
eridx = np.array([-1, 1, -1, -1, -1, -1, 4, -1, -1, -1],
dtype=np.int64)
tm.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assertIsNone(lidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -679,7 +682,7 @@ def test_join_left(self):
eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2
eridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
elidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -694,7 +697,7 @@ def test_join_right(self):
elidx = np.array([-1, 6, -1, -1, 1, -1], dtype=np.int_)
tm.assertIsInstance(other, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
self.assertIsNone(ridx)
@@ -704,7 +707,7 @@ def test_join_right(self):
eres = other_mono
elidx = np.array([-1, 1, -1, -1, 6, -1], dtype=np.int64)
tm.assertIsInstance(other, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
self.assertIsNone(ridx)
@@ -715,7 +718,7 @@ def test_join_right(self):
eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2
elidx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
eridx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
tm.assert_numpy_array_equal(lidx, elidx)
tm.assert_numpy_array_equal(ridx, eridx)
@@ -724,28 +727,27 @@ def test_join_non_int_index(self):
outer = self.index.join(other, how='outer')
outer2 = other.join(self.index, how='outer')
- expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14,
- 16, 18], dtype=object)
- self.assertTrue(outer.equals(outer2))
- self.assertTrue(outer.equals(expected))
+ expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, 16, 18])
+ self.assert_index_equal(outer, outer2)
+ self.assert_index_equal(outer, expected)
inner = self.index.join(other, how='inner')
inner2 = other.join(self.index, how='inner')
- expected = Index([6, 8, 10], dtype=object)
- self.assertTrue(inner.equals(inner2))
- self.assertTrue(inner.equals(expected))
+ expected = Index([6, 8, 10])
+ self.assert_index_equal(inner, inner2)
+ self.assert_index_equal(inner, expected)
left = self.index.join(other, how='left')
- self.assertTrue(left.equals(self.index))
+ self.assert_index_equal(left, self.index.astype(object))
left2 = other.join(self.index, how='left')
- self.assertTrue(left2.equals(other))
+ self.assert_index_equal(left2, other)
right = self.index.join(other, how='right')
- self.assertTrue(right.equals(other))
+ self.assert_index_equal(right, other)
right2 = other.join(self.index, how='right')
- self.assertTrue(right2.equals(self.index))
+ self.assert_index_equal(right2, self.index.astype(object))
def test_join_non_unique(self):
left = Index([4, 4, 3, 3])
@@ -753,7 +755,7 @@ def test_join_non_unique(self):
joined, lidx, ridx = left.join(left, return_indexers=True)
exp_joined = Index([3, 3, 3, 3, 4, 4, 4, 4])
- self.assertTrue(joined.equals(exp_joined))
+ self.assert_index_equal(joined, exp_joined)
exp_lidx = np.array([2, 2, 3, 3, 0, 0, 1, 1], dtype=np.int_)
tm.assert_numpy_array_equal(lidx, exp_lidx)
@@ -770,13 +772,14 @@ def test_join_self(self):
def test_intersection(self):
other = Index([1, 2, 3, 4, 5])
result = self.index.intersection(other)
- expected = np.sort(np.intersect1d(self.index.values, other.values))
- tm.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.intersect1d(self.index.values,
+ other.values)))
+ tm.assert_index_equal(result, expected)
result = other.intersection(self.index)
- expected = np.sort(np.asarray(np.intersect1d(self.index.values,
- other.values)))
- tm.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.asarray(np.intersect1d(self.index.values,
+ other.values))))
+ tm.assert_index_equal(result, expected)
def test_intersect_str_dates(self):
dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
@@ -793,12 +796,12 @@ def test_union_noncomparable(self):
now = datetime.now()
other = Index([now + timedelta(i) for i in range(4)], dtype=object)
result = self.index.union(other)
- expected = np.concatenate((self.index, other))
- tm.assert_numpy_array_equal(result, expected)
+ expected = Index(np.concatenate((self.index, other)))
+ tm.assert_index_equal(result, expected)
result = other.union(self.index)
- expected = np.concatenate((other, self.index))
- tm.assert_numpy_array_equal(result, expected)
+ expected = Index(np.concatenate((other, self.index)))
+ tm.assert_index_equal(result, expected)
def test_cant_or_shouldnt_cast(self):
# can't
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 8b04b510146d2..99e4b72bcee37 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -102,10 +102,10 @@ def test_constructor_same(self):
self.assertTrue(result.identical(index))
result = RangeIndex(index, copy=True)
- self.assertTrue(result.equals(index))
+ self.assert_index_equal(result, index, exact=True)
result = RangeIndex(index)
- self.assertTrue(result.equals(index))
+ self.assert_index_equal(result, index, exact=True)
self.assertRaises(TypeError,
lambda: RangeIndex(index, dtype='float64'))
@@ -116,24 +116,24 @@ def test_constructor_range(self):
result = RangeIndex.from_range(range(1, 5, 2))
expected = RangeIndex(1, 5, 2)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
result = RangeIndex.from_range(range(5, 6))
expected = RangeIndex(5, 6, 1)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
# an invalid range
result = RangeIndex.from_range(range(5, 1))
expected = RangeIndex(0, 0, 1)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
result = RangeIndex.from_range(range(5))
expected = RangeIndex(0, 5, 1)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
result = Index(range(1, 5, 2))
expected = RangeIndex(1, 5, 2)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
self.assertRaises(TypeError,
lambda: Index(range(1, 5, 2), dtype='float64'))
@@ -165,27 +165,28 @@ def test_numeric_compat2(self):
result = idx * 2
expected = RangeIndex(0, 20, 4)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
result = idx + 2
expected = RangeIndex(2, 12, 2)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
result = idx - 2
expected = RangeIndex(-2, 8, 2)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected, exact=True)
# truediv under PY3
result = idx / 2
+
if PY3:
- expected = RangeIndex(0, 5, 1)
- else:
expected = RangeIndex(0, 5, 1).astype('float64')
- self.assertTrue(result.equals(expected))
+ else:
+ expected = RangeIndex(0, 5, 1)
+ self.assert_index_equal(result, expected, exact=True)
result = idx / 4
- expected = RangeIndex(0, 10, 2).values / 4
- self.assertTrue(result.equals(expected))
+ expected = RangeIndex(0, 10, 2) / 4
+ self.assert_index_equal(result, expected, exact=True)
result = idx // 1
expected = idx
@@ -220,7 +221,7 @@ def test_constructor_corner(self):
arr = np.array([1, 2, 3, 4], dtype=object)
index = RangeIndex(1, 5)
self.assertEqual(index.values.dtype, np.int64)
- self.assertTrue(index.equals(arr))
+ self.assert_index_equal(index, Index(arr))
# non-int raise Exception
self.assertRaises(TypeError, RangeIndex, '1', '10', '1')
@@ -249,7 +250,7 @@ def test_repr(self):
self.assertTrue(result, expected)
result = eval(result)
- self.assertTrue(result.equals(i))
+ self.assert_index_equal(result, i, exact=True)
i = RangeIndex(5, 0, -1)
result = repr(i)
@@ -257,7 +258,7 @@ def test_repr(self):
self.assertEqual(result, expected)
result = eval(result)
- self.assertTrue(result.equals(i))
+ self.assert_index_equal(result, i, exact=True)
def test_insert(self):
@@ -265,19 +266,19 @@ def test_insert(self):
result = idx[1:4]
# test 0th element
- self.assertTrue(idx[0:4].equals(result.insert(0, idx[0])))
+ self.assert_index_equal(idx[0:4], result.insert(0, idx[0]))
def test_delete(self):
idx = RangeIndex(5, name='Foo')
expected = idx[1:].astype(int)
result = idx.delete(0)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
expected = idx[:-1].astype(int)
result = idx.delete(-1)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
with tm.assertRaises((IndexError, ValueError)):
@@ -292,7 +293,7 @@ def test_view(self):
self.assertEqual(i_view.name, 'Foo')
i_view = i.view('i8')
- tm.assert_numpy_array_equal(i, i_view)
+ tm.assert_numpy_array_equal(i.values, i_view)
i_view = i.view(RangeIndex)
tm.assert_index_equal(i, i_view)
@@ -376,7 +377,7 @@ def test_join_outer(self):
res, lidx, ridx = self.index.join(other, how='outer',
return_indexers=True)
noidx_res = self.index.join(other, how='outer')
- self.assertTrue(res.equals(noidx_res))
+ self.assert_index_equal(res, noidx_res)
eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25])
@@ -387,7 +388,7 @@ def test_join_outer(self):
self.assertIsInstance(res, Int64Index)
self.assertFalse(isinstance(res, RangeIndex))
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -397,11 +398,11 @@ def test_join_outer(self):
res, lidx, ridx = self.index.join(other, how='outer',
return_indexers=True)
noidx_res = self.index.join(other, how='outer')
- self.assertTrue(res.equals(noidx_res))
+ self.assert_index_equal(res, noidx_res)
self.assertIsInstance(res, Int64Index)
self.assertFalse(isinstance(res, RangeIndex))
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -423,7 +424,7 @@ def test_join_inner(self):
eridx = np.array([9, 7])
self.assertIsInstance(res, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -434,7 +435,7 @@ def test_join_inner(self):
return_indexers=True)
self.assertIsInstance(res, RangeIndex)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -448,7 +449,7 @@ def test_join_left(self):
eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 9, 7], dtype=np.int_)
self.assertIsInstance(res, RangeIndex)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assertIsNone(lidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -459,7 +460,7 @@ def test_join_left(self):
return_indexers=True)
self.assertIsInstance(res, RangeIndex)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assertIsNone(lidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -474,7 +475,7 @@ def test_join_right(self):
dtype=np.int_)
self.assertIsInstance(other, Int64Index)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assertIsNone(ridx)
@@ -486,7 +487,7 @@ def test_join_right(self):
eres = other
self.assertIsInstance(other, RangeIndex)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assertIsNone(ridx)
@@ -495,28 +496,27 @@ def test_join_non_int_index(self):
outer = self.index.join(other, how='outer')
outer2 = other.join(self.index, how='outer')
- expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14,
- 16, 18], dtype=object)
- self.assertTrue(outer.equals(outer2))
- self.assertTrue(outer.equals(expected))
+ expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, 16, 18])
+ self.assert_index_equal(outer, outer2)
+ self.assert_index_equal(outer, expected)
inner = self.index.join(other, how='inner')
inner2 = other.join(self.index, how='inner')
- expected = Index([6, 8, 10], dtype=object)
- self.assertTrue(inner.equals(inner2))
- self.assertTrue(inner.equals(expected))
+ expected = Index([6, 8, 10])
+ self.assert_index_equal(inner, inner2)
+ self.assert_index_equal(inner, expected)
left = self.index.join(other, how='left')
- self.assertTrue(left.equals(self.index))
+ self.assert_index_equal(left, self.index.astype(object))
left2 = other.join(self.index, how='left')
- self.assertTrue(left2.equals(other))
+ self.assert_index_equal(left2, other)
right = self.index.join(other, how='right')
- self.assertTrue(right.equals(other))
+ self.assert_index_equal(right, other)
right2 = other.join(self.index, how='right')
- self.assertTrue(right2.equals(self.index))
+ self.assert_index_equal(right2, self.index.astype(object))
def test_join_non_unique(self):
other = Index([4, 4, 3, 3])
@@ -528,7 +528,7 @@ def test_join_non_unique(self):
eridx = np.array([-1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1],
dtype=np.int_)
- self.assertTrue(res.equals(eres))
+ self.assert_index_equal(res, eres)
self.assert_numpy_array_equal(lidx, elidx)
self.assert_numpy_array_equal(ridx, eridx)
@@ -542,25 +542,28 @@ def test_intersection(self):
# intersect with Int64Index
other = Index(np.arange(1, 6))
result = self.index.intersection(other)
- expected = np.sort(np.intersect1d(self.index.values, other.values))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.intersect1d(self.index.values,
+ other.values)))
+ self.assert_index_equal(result, expected)
result = other.intersection(self.index)
- expected = np.sort(np.asarray(np.intersect1d(self.index.values,
- other.values)))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.asarray(np.intersect1d(self.index.values,
+ other.values))))
+ self.assert_index_equal(result, expected)
# intersect with increasing RangeIndex
other = RangeIndex(1, 6)
result = self.index.intersection(other)
- expected = np.sort(np.intersect1d(self.index.values, other.values))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.intersect1d(self.index.values,
+ other.values)))
+ self.assert_index_equal(result, expected)
# intersect with decreasing RangeIndex
other = RangeIndex(5, 0, -1)
result = self.index.intersection(other)
- expected = np.sort(np.intersect1d(self.index.values, other.values))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.sort(np.intersect1d(self.index.values,
+ other.values)))
+ self.assert_index_equal(result, expected)
def test_intersect_str_dates(self):
dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)]
@@ -577,12 +580,12 @@ def test_union_noncomparable(self):
now = datetime.now()
other = Index([now + timedelta(i) for i in range(4)], dtype=object)
result = self.index.union(other)
- expected = np.concatenate((self.index, other))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.concatenate((self.index, other)))
+ self.assert_index_equal(result, expected)
result = other.union(self.index)
- expected = np.concatenate((other, self.index))
- self.assert_numpy_array_equal(result, expected)
+ expected = Index(np.concatenate((other, self.index)))
+ self.assert_index_equal(result, expected)
def test_union(self):
RI = RangeIndex
@@ -789,43 +792,43 @@ def test_slice_specialised(self):
# slice value completion
index = self.index[:]
expected = self.index
- self.assert_numpy_array_equal(index, expected)
+ self.assert_index_equal(index, expected)
# positive slice values
index = self.index[7:10:2]
- expected = np.array([14, 18])
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(np.array([14, 18]), name='foo')
+ self.assert_index_equal(index, expected)
# negative slice values
index = self.index[-1:-5:-2]
- expected = np.array([18, 14])
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(np.array([18, 14]), name='foo')
+ self.assert_index_equal(index, expected)
# stop overshoot
index = self.index[2:100:4]
- expected = np.array([4, 12])
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(np.array([4, 12]), name='foo')
+ self.assert_index_equal(index, expected)
# reverse
index = self.index[::-1]
- expected = self.index.values[::-1]
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(self.index.values[::-1], name='foo')
+ self.assert_index_equal(index, expected)
index = self.index[-8::-1]
- expected = np.array([4, 2, 0])
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(np.array([4, 2, 0]), name='foo')
+ self.assert_index_equal(index, expected)
index = self.index[-40::-1]
- expected = np.array([])
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(np.array([], dtype=np.int64), name='foo')
+ self.assert_index_equal(index, expected)
index = self.index[40::-1]
- expected = self.index.values[40::-1]
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(self.index.values[40::-1], name='foo')
+ self.assert_index_equal(index, expected)
index = self.index[10::-1]
- expected = self.index.values[::-1]
- self.assert_numpy_array_equal(index, expected)
+ expected = Index(self.index.values[::-1], name='foo')
+ self.assert_index_equal(index, expected)
def test_len_specialised(self):
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 2a2f8678694de..29f3889d20bd0 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -538,8 +538,10 @@ def test_slice_float(self):
# getitem
result = idxr(s)[l]
- self.assertTrue(result.equals(expected))
-
+ if isinstance(s, Series):
+ self.assert_series_equal(result, expected)
+ else:
+ self.assert_frame_equal(result, expected)
# setitem
s2 = s.copy()
idxr(s2)[l] = 0
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index e1fd17f0c26e0..b86b248ead290 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -2925,7 +2925,7 @@ def test_dups_fancy_indexing(self):
df.columns = ['a', 'a', 'b']
result = df[['b', 'a']].columns
expected = Index(['b', 'a', 'a'])
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
# across dtypes
df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']],
@@ -3829,7 +3829,7 @@ def test_astype_assignment_with_dups(self):
index = df.index.copy()
df['A'] = df['A'].astype(np.float64)
- self.assertTrue(df.index.equals(index))
+ self.assert_index_equal(df.index, index)
# TODO(wesm): unused variables
# result = df.get_dtype_counts().sort_index()
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index 574dcd54933ae..2ddfa27eea377 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -48,7 +48,7 @@ def test_rename(self):
# partial dict
s = Series(np.arange(4), index=['a', 'b', 'c', 'd'], dtype='int64')
renamed = s.rename({'b': 'foo', 'd': 'bar'})
- self.assert_numpy_array_equal(renamed.index, ['a', 'foo', 'c', 'bar'])
+ self.assert_index_equal(renamed.index, Index(['a', 'foo', 'c', 'bar']))
# index with name
renamer = Series(np.arange(4),
@@ -141,7 +141,7 @@ def test_reset_index(self):
self.assertEqual(len(rs.columns), 2)
rs = s.reset_index(level=[0, 2], drop=True)
- self.assertTrue(rs.index.equals(Index(index.get_level_values(1))))
+ self.assert_index_equal(rs.index, Index(index.get_level_values(1)))
tm.assertIsInstance(rs, Series)
def test_reset_index_range(self):
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 34aaccb6464aa..c190b0d9e3bb0 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -289,8 +289,8 @@ def test_argsort_stable(self):
mexpected = np.argsort(s.values, kind='mergesort')
qexpected = np.argsort(s.values, kind='quicksort')
- self.assert_numpy_array_equal(mindexer, mexpected)
- self.assert_numpy_array_equal(qindexer, qexpected)
+ self.assert_series_equal(mindexer, Series(mexpected))
+ self.assert_series_equal(qindexer, Series(qexpected))
self.assertFalse(np.array_equal(qindexer, mindexer))
def test_cumsum(self):
@@ -300,24 +300,24 @@ def test_cumprod(self):
self._check_accum_op('cumprod')
def test_cummin(self):
- self.assert_numpy_array_equal(self.ts.cummin(),
+ self.assert_numpy_array_equal(self.ts.cummin().values,
np.minimum.accumulate(np.array(self.ts)))
ts = self.ts.copy()
ts[::2] = np.NaN
result = ts.cummin()[1::2]
expected = np.minimum.accumulate(ts.valid())
- self.assert_numpy_array_equal(result, expected)
+ self.assert_series_equal(result, expected)
def test_cummax(self):
- self.assert_numpy_array_equal(self.ts.cummax(),
+ self.assert_numpy_array_equal(self.ts.cummax().values,
np.maximum.accumulate(np.array(self.ts)))
ts = self.ts.copy()
ts[::2] = np.NaN
result = ts.cummax()[1::2]
expected = np.maximum.accumulate(ts.valid())
- self.assert_numpy_array_equal(result, expected)
+ self.assert_series_equal(result, expected)
def test_cummin_datetime64(self):
s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1',
@@ -489,7 +489,8 @@ def testit():
def _check_accum_op(self, name):
func = getattr(np, name)
- self.assert_numpy_array_equal(func(self.ts), func(np.array(self.ts)))
+ self.assert_numpy_array_equal(func(self.ts).values,
+ func(np.array(self.ts)))
# with missing values
ts = self.ts.copy()
@@ -498,7 +499,7 @@ def _check_accum_op(self, name):
result = func(ts)[1::2]
expected = func(np.array(ts.valid()))
- self.assert_numpy_array_equal(result, expected)
+ self.assert_numpy_array_equal(result.values, expected)
def test_compress(self):
cond = [True, False, True, False, False]
@@ -1404,13 +1405,13 @@ def test_sort_values(self):
with tm.assert_produces_warning(FutureWarning):
ts.sort()
- self.assert_numpy_array_equal(ts, self.ts.sort_values())
- self.assert_numpy_array_equal(ts.index, self.ts.sort_values().index)
+ self.assert_series_equal(ts, self.ts.sort_values())
+ self.assert_index_equal(ts.index, self.ts.sort_values().index)
ts.sort_values(ascending=False, inplace=True)
- self.assert_numpy_array_equal(ts, self.ts.sort_values(ascending=False))
- self.assert_numpy_array_equal(ts.index, self.ts.sort_values(
- ascending=False).index)
+ self.assert_series_equal(ts, self.ts.sort_values(ascending=False))
+ self.assert_index_equal(ts.index,
+ self.ts.sort_values(ascending=False).index)
# GH 5856/5853
# Series.sort_values operating on a view
@@ -1513,11 +1514,11 @@ def test_order(self):
result = ts.sort_values()
self.assertTrue(np.isnan(result[-5:]).all())
- self.assert_numpy_array_equal(result[:-5], np.sort(vals[5:]))
+ self.assert_numpy_array_equal(result[:-5].values, np.sort(vals[5:]))
result = ts.sort_values(na_position='first')
self.assertTrue(np.isnan(result[:5]).all())
- self.assert_numpy_array_equal(result[5:], np.sort(vals[5:]))
+ self.assert_numpy_array_equal(result[5:].values, np.sort(vals[5:]))
# something object-type
ser = Series(['A', 'B'], [1, 2])
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index 9cb1e9dd93d16..26fc80c3ef988 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -160,7 +160,7 @@ def test_map(self):
# function
result = self.ts.map(lambda x: x * 2)
- self.assert_numpy_array_equal(result, self.ts * 2)
+ self.assert_series_equal(result, self.ts * 2)
# GH 10324
a = Series([1, 2, 3, 4])
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 48224c7bfbd63..eb560d4a17055 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -49,14 +49,14 @@ def test_combine_first(self):
# nothing used from the input
combined = series.combine_first(series_copy)
- self.assert_numpy_array_equal(combined, series)
+ self.assert_series_equal(combined, series)
# Holes filled from input
combined = series_copy.combine_first(series)
self.assertTrue(np.isfinite(combined).all())
- self.assert_numpy_array_equal(combined[::2], series[::2])
- self.assert_numpy_array_equal(combined[1::2], series_copy[1::2])
+ self.assert_series_equal(combined[::2], series[::2])
+ self.assert_series_equal(combined[1::2], series_copy[1::2])
# mixed types
index = tm.makeStringIndex(20)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 68733700e1483..a80a3af56b18f 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -137,7 +137,7 @@ def test_constructor_categorical(self):
cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'],
fastpath=True)
res = Series(cat)
- self.assertTrue(res.values.equals(cat))
+ tm.assert_categorical_equal(res.values, cat)
# GH12574
self.assertRaises(
@@ -418,8 +418,10 @@ def test_constructor_with_datetime_tz(self):
result = s.values
self.assertIsInstance(result, np.ndarray)
self.assertTrue(result.dtype == 'datetime64[ns]')
- self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize(
- 'UTC').tz_convert(tz=s.dt.tz)))
+
+ exp = pd.DatetimeIndex(result)
+ exp = exp.tz_localize('UTC').tz_convert(tz=s.dt.tz)
+ self.assert_index_equal(dr, exp)
# indexing
result = s.iloc[0]
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index fc963d4597246..6864eac603ded 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -55,7 +55,7 @@ def test_astype_cast_object_int(self):
arr = Series(['1', '2', '3', '4'], dtype=object)
result = arr.astype(int)
- self.assert_numpy_array_equal(result, np.arange(1, 5))
+ self.assert_series_equal(result, Series(np.arange(1, 5)))
def test_astype_datetimes(self):
import pandas.tslib as tslib
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 29cd887c7075f..d01ac3e1aef42 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -246,7 +246,7 @@ def test_getitem_boolean(self):
result = s[list(mask)]
expected = s[mask]
assert_series_equal(result, expected)
- self.assert_numpy_array_equal(result.index, s.index[mask])
+ self.assert_index_equal(result.index, s.index[mask])
def test_getitem_boolean_empty(self):
s = Series([], dtype=np.int64)
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 4fda1152abd96..f89501d39f014 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -130,7 +130,7 @@ def test_to_frame(self):
assert_frame_equal(rs, xp)
def test_to_dict(self):
- self.assert_numpy_array_equal(Series(self.ts.to_dict()), self.ts)
+ self.assert_series_equal(Series(self.ts.to_dict(), name='ts'), self.ts)
def test_timeseries_periodindex(self):
# GH2891
diff --git a/pandas/tests/series/test_misc_api.py b/pandas/tests/series/test_misc_api.py
index 9f5433782b062..d74966738909d 100644
--- a/pandas/tests/series/test_misc_api.py
+++ b/pandas/tests/series/test_misc_api.py
@@ -206,7 +206,7 @@ def test_keys(self):
self.assertIs(getkeys(), self.ts.index)
def test_values(self):
- self.assert_numpy_array_equal(self.ts, self.ts.values)
+ self.assert_almost_equal(self.ts.values, self.ts, check_dtype=False)
def test_iteritems(self):
for idx, val in compat.iteritems(self.series):
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index e27a21e6d5903..ed10f5b0a7af3 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -247,16 +247,18 @@ def test_isnull_for_inf(self):
def test_fillna(self):
ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
- self.assert_numpy_array_equal(ts, ts.fillna(method='ffill'))
+ self.assert_series_equal(ts, ts.fillna(method='ffill'))
ts[2] = np.NaN
- self.assert_numpy_array_equal(ts.fillna(method='ffill'),
- [0., 1., 1., 3., 4.])
- self.assert_numpy_array_equal(ts.fillna(method='backfill'),
- [0., 1., 3., 3., 4.])
+ exp = Series([0., 1., 1., 3., 4.], index=ts.index)
+ self.assert_series_equal(ts.fillna(method='ffill'), exp)
- self.assert_numpy_array_equal(ts.fillna(value=5), [0., 1., 5., 3., 4.])
+ exp = Series([0., 1., 3., 3., 4.], index=ts.index)
+ self.assert_series_equal(ts.fillna(method='backfill'), exp)
+
+ exp = Series([0., 1., 5., 3., 4.], index=ts.index)
+ self.assert_series_equal(ts.fillna(value=5), exp)
self.assertRaises(ValueError, ts.fillna)
self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill')
@@ -488,7 +490,7 @@ def test_interpolate(self):
ts_copy[5:10] = np.NaN
linear_interp = ts_copy.interpolate(method='linear')
- self.assert_numpy_array_equal(linear_interp, ts)
+ self.assert_series_equal(linear_interp, ts)
ord_ts = Series([d.toordinal() for d in self.ts.index],
index=self.ts.index).astype(float)
@@ -497,7 +499,7 @@ def test_interpolate(self):
ord_ts_copy[5:10] = np.NaN
time_interp = ord_ts_copy.interpolate(method='time')
- self.assert_numpy_array_equal(time_interp, ord_ts)
+ self.assert_series_equal(time_interp, ord_ts)
# try time interpolation on a non-TimeSeries
# Only raises ValueError if there are NaNs.
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index c5ef969d3b39d..3588faa8b42f1 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -1227,8 +1227,9 @@ def test_operators_corner(self):
# float + int
int_ts = self.ts.astype(int)[:-5]
added = self.ts + int_ts
- expected = self.ts.values[:-5] + int_ts.values
- self.assert_numpy_array_equal(added[:-5], expected)
+ expected = Series(self.ts.values[:-5] + int_ts.values,
+ index=self.ts.index[:-5], name='ts')
+ self.assert_series_equal(added[:-5], expected)
def test_operators_reverse_object(self):
# GH 56
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index 463063016f1e9..13b95ea97eedf 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -492,15 +492,15 @@ def test_asfreq(self):
daily_ts = ts.asfreq('B')
monthly_ts = daily_ts.asfreq('BM')
- self.assert_numpy_array_equal(monthly_ts, ts)
+ self.assert_series_equal(monthly_ts, ts)
daily_ts = ts.asfreq('B', method='pad')
monthly_ts = daily_ts.asfreq('BM')
- self.assert_numpy_array_equal(monthly_ts, ts)
+ self.assert_series_equal(monthly_ts, ts)
daily_ts = ts.asfreq(datetools.bday)
monthly_ts = daily_ts.asfreq(datetools.bmonthEnd)
- self.assert_numpy_array_equal(monthly_ts, ts)
+ self.assert_series_equal(monthly_ts, ts)
result = ts[:0].asfreq('M')
self.assertEqual(len(result), 0)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 917f108711d09..4758c7f979da0 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -102,14 +102,14 @@ def test_mixed(self):
exp = np.array([0, 0, -1, 1, 2, 3], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- exp = np.array(['A', 'B', 3.14, np.inf], dtype=object)
- self.assert_numpy_array_equal(uniques, exp)
+ exp = pd.Index(['A', 'B', 3.14, np.inf])
+ tm.assert_index_equal(uniques, exp)
labels, uniques = algos.factorize(x, sort=True)
exp = np.array([2, 2, -1, 3, 0, 1], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- exp = np.array([3.14, np.inf, 'A', 'B'], dtype=object)
- self.assert_numpy_array_equal(uniques, exp)
+ exp = pd.Index([3.14, np.inf, 'A', 'B'])
+ tm.assert_index_equal(uniques, exp)
def test_datelike(self):
@@ -121,14 +121,14 @@ def test_datelike(self):
exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- exp = np.array([v1.value, v2.value], dtype='M8[ns]')
- self.assert_numpy_array_equal(uniques, exp)
+ exp = pd.DatetimeIndex([v1, v2])
+ self.assert_index_equal(uniques, exp)
labels, uniques = algos.factorize(x, sort=True)
exp = np.array([1, 1, 1, 0, 0, 1], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- exp = np.array([v2.value, v1.value], dtype='M8[ns]')
- self.assert_numpy_array_equal(uniques, exp)
+ exp = pd.DatetimeIndex([v2, v1])
+ self.assert_index_equal(uniques, exp)
# period
v1 = pd.Period('201302', freq='M')
@@ -139,12 +139,12 @@ def test_datelike(self):
labels, uniques = algos.factorize(x)
exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2]))
+ self.assert_index_equal(uniques, pd.PeriodIndex([v1, v2]))
labels, uniques = algos.factorize(x, sort=True)
exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2]))
+ self.assert_index_equal(uniques, pd.PeriodIndex([v1, v2]))
# GH 5986
v1 = pd.to_timedelta('1 day 1 min')
@@ -153,12 +153,12 @@ def test_datelike(self):
labels, uniques = algos.factorize(x)
exp = np.array([0, 1, 0, 0, 1, 1, 0], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- self.assert_numpy_array_equal(uniques, pd.to_timedelta([v1, v2]))
+ self.assert_index_equal(uniques, pd.to_timedelta([v1, v2]))
labels, uniques = algos.factorize(x, sort=True)
exp = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.int_)
self.assert_numpy_array_equal(labels, exp)
- self.assert_numpy_array_equal(uniques, pd.to_timedelta([v2, v1]))
+ self.assert_index_equal(uniques, pd.to_timedelta([v2, v1]))
def test_factorize_nan(self):
# nan should map to na_sentinel, not reverse_indexer[na_sentinel]
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 2b28e3b6ed8e0..77ae3ca20d123 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -18,7 +18,6 @@
from pandas.core.base import (FrozenList, FrozenNDArray, PandasDelegate,
NoNewAttributesMixin)
from pandas.tseries.base import DatetimeIndexOpsMixin
-from pandas.util.testing import (assertRaisesRegexp, assertIsInstance)
class CheckStringMixin(object):
@@ -46,7 +45,7 @@ class CheckImmutable(object):
def check_mutable_error(self, *args, **kwargs):
# pass whatever functions you normally would to assertRaises (after the
# Exception kind)
- assertRaisesRegexp(TypeError, self.mutable_regex, *args, **kwargs)
+ tm.assertRaisesRegexp(TypeError, self.mutable_regex, *args, **kwargs)
def test_no_mutable_funcs(self):
def setitem():
@@ -79,7 +78,7 @@ def test_slicing_maintains_type(self):
def check_result(self, result, expected, klass=None):
klass = klass or self.klass
- assertIsInstance(result, klass)
+ self.assertIsInstance(result, klass)
self.assertEqual(result, expected)
@@ -120,13 +119,13 @@ def setUp(self):
def test_shallow_copying(self):
original = self.container.copy()
- assertIsInstance(self.container.view(), FrozenNDArray)
+ self.assertIsInstance(self.container.view(), FrozenNDArray)
self.assertFalse(isinstance(
self.container.view(np.ndarray), FrozenNDArray))
self.assertIsNot(self.container.view(), self.container)
self.assert_numpy_array_equal(self.container, original)
# shallow copy should be the same too
- assertIsInstance(self.container._shallow_copy(), FrozenNDArray)
+ self.assertIsInstance(self.container._shallow_copy(), FrozenNDArray)
# setting should not be allowed
def testit(container):
@@ -141,7 +140,8 @@ def test_values(self):
self.assert_numpy_array_equal(original, vals)
self.assertIsNot(original, vals)
vals[0] = n
- self.assert_numpy_array_equal(self.container, original)
+ self.assertIsInstance(self.container, pd.core.base.FrozenNDArray)
+ self.assert_numpy_array_equal(self.container.values(), original)
self.assertEqual(vals[0], n)
@@ -448,7 +448,9 @@ def test_nanops(self):
self.assertEqual(obj.argmax(), -1)
def test_value_counts_unique_nunique(self):
- for o in self.objs:
+ for orig in self.objs:
+
+ o = orig.copy()
klass = type(o)
values = o.values
@@ -485,13 +487,11 @@ def test_value_counts_unique_nunique(self):
else:
expected_index = pd.Index(values[::-1])
idx = o.index.repeat(range(1, len(o) + 1))
- o = klass(
- np.repeat(values, range(1,
- len(o) + 1)), index=idx, name='a')
+ o = klass(np.repeat(values, range(1, len(o) + 1)),
+ index=idx, name='a')
- expected_s = Series(
- range(10, 0, -
- 1), index=expected_index, dtype='int64', name='a')
+ expected_s = Series(range(10, 0, -1), index=expected_index,
+ dtype='int64', name='a')
result = o.value_counts()
tm.assert_series_equal(result, expected_s)
@@ -501,10 +501,10 @@ def test_value_counts_unique_nunique(self):
result = o.unique()
if isinstance(o, (DatetimeIndex, PeriodIndex)):
self.assertTrue(isinstance(result, o.__class__))
- self.assertEqual(result.name, o.name)
self.assertEqual(result.freq, o.freq)
-
- self.assert_numpy_array_equal(result, values)
+ self.assert_index_equal(result, orig)
+ else:
+ self.assert_numpy_array_equal(result, values)
self.assertEqual(o.nunique(), len(np.unique(o.values)))
@@ -541,9 +541,8 @@ def test_value_counts_unique_nunique(self):
# resets name from Index
expected_index = pd.Index(o, name=None)
# attach name to klass
- o = klass(
- np.repeat(values, range(
- 1, len(o) + 1)), freq=o.freq, name='a')
+ o = klass(np.repeat(values, range(1, len(o) + 1)),
+ freq=o.freq, name='a')
elif isinstance(o, Index):
expected_index = pd.Index(values, name=None)
o = klass(
@@ -610,6 +609,12 @@ def test_value_counts_inferred(self):
expected = Series([.4, .3, .2, .1], index=['b', 'a', 'd', 'c'])
tm.assert_series_equal(hist, expected)
+ def test_value_counts_bins(self):
+ klasses = [Index, Series]
+ for klass in klasses:
+ s_values = ['a', 'b', 'b', 'b', 'b', 'c', 'd', 'd', 'a', 'a']
+ s = klass(s_values)
+
# bins
self.assertRaises(TypeError,
lambda bins: s.value_counts(bins=bins), 1)
@@ -660,6 +665,9 @@ def test_value_counts_inferred(self):
check_dtype=False)
self.assertEqual(s.nunique(), 0)
+ def test_value_counts_datetime64(self):
+ klasses = [Index, Series]
+ for klass in klasses:
# GH 3002, datetime64[ns]
# don't test names though
txt = "\n".join(['xxyyzz20100101PIE', 'xxyyzz20100101GUM',
@@ -673,9 +681,9 @@ def test_value_counts_inferred(self):
s = klass(df['dt'].copy())
s.name = None
- idx = pd.to_datetime(
- ['2010-01-01 00:00:00Z', '2008-09-09 00:00:00Z',
- '2009-01-01 00:00:00X'])
+ idx = pd.to_datetime(['2010-01-01 00:00:00Z',
+ '2008-09-09 00:00:00Z',
+ '2009-01-01 00:00:00X'])
expected_s = Series([3, 2, 1], index=idx)
tm.assert_series_equal(s.value_counts(), expected_s)
@@ -684,8 +692,7 @@ def test_value_counts_inferred(self):
'2008-09-09 00:00:00Z'],
dtype='datetime64[ns]')
if isinstance(s, DatetimeIndex):
- expected = DatetimeIndex(expected)
- self.assertTrue(s.unique().equals(expected))
+ self.assert_index_equal(s.unique(), DatetimeIndex(expected))
else:
self.assert_numpy_array_equal(s.unique(), expected)
@@ -707,9 +714,12 @@ def test_value_counts_inferred(self):
self.assertEqual(unique.dtype, 'datetime64[ns]')
# numpy_array_equal cannot compare pd.NaT
- self.assert_numpy_array_equal(unique[:3], expected)
- self.assertTrue(unique[3] is pd.NaT or unique[3].astype('int64') ==
- pd.tslib.iNaT)
+ if isinstance(s, DatetimeIndex):
+ self.assert_index_equal(unique[:3], DatetimeIndex(expected))
+ else:
+ self.assert_numpy_array_equal(unique[:3], expected)
+ self.assertTrue(unique[3] is pd.NaT or
+ unique[3].astype('int64') == pd.tslib.iNaT)
self.assertEqual(s.nunique(), 3)
self.assertEqual(s.nunique(dropna=False), 4)
@@ -722,9 +732,9 @@ def test_value_counts_inferred(self):
expected_s = Series([6], index=[Timedelta('1day')], name='dt')
tm.assert_series_equal(result, expected_s)
- expected = TimedeltaIndex(['1 days'])
+ expected = TimedeltaIndex(['1 days'], name='dt')
if isinstance(td, TimedeltaIndex):
- self.assertTrue(td.unique().equals(expected))
+ self.assert_index_equal(td.unique(), expected)
else:
self.assert_numpy_array_equal(td.unique(), expected.values)
@@ -734,7 +744,8 @@ def test_value_counts_inferred(self):
tm.assert_series_equal(result2, expected_s)
def test_factorize(self):
- for o in self.objs:
+ for orig in self.objs:
+ o = orig.copy()
if isinstance(o, Index) and o.is_boolean():
exp_arr = np.array([0, 1] + [0] * 8)
@@ -747,12 +758,16 @@ def test_factorize(self):
self.assert_numpy_array_equal(labels, exp_arr)
if isinstance(o, Series):
- expected = Index(o.values)
- self.assert_numpy_array_equal(uniques, expected)
+ self.assert_index_equal(uniques, Index(orig),
+ check_names=False)
else:
- self.assertTrue(uniques.equals(exp_uniques))
+ # factorize explicitly resets name
+ self.assert_index_equal(uniques, exp_uniques,
+ check_names=False)
- for o in self.objs:
+ def test_factorize_repeated(self):
+ for orig in self.objs:
+ o = orig.copy()
# don't test boolean
if isinstance(o, Index) and o.is_boolean():
@@ -772,27 +787,25 @@ def test_factorize(self):
self.assert_numpy_array_equal(labels, exp_arr)
if isinstance(o, Series):
- expected = Index(o.values)
- self.assert_numpy_array_equal(uniques, expected)
+ self.assert_index_equal(uniques, Index(orig).sort_values(),
+ check_names=False)
else:
- self.assertTrue(uniques.equals(o))
+ self.assert_index_equal(uniques, o, check_names=False)
exp_arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4])
labels, uniques = n.factorize(sort=False)
self.assert_numpy_array_equal(labels, exp_arr)
if isinstance(o, Series):
- expected = Index(np.concatenate([o.values[5:10], o.values[:5]
- ]))
- self.assert_numpy_array_equal(uniques, expected)
+ expected = Index(o.iloc[5:10].append(o.iloc[:5]))
+ self.assert_index_equal(uniques, expected, check_names=False)
else:
- expected = o[5:].append(o[:5])
- self.assertTrue(uniques.equals(expected))
+ expected = o[5:10].append(o[:5])
+ self.assert_index_equal(uniques, expected, check_names=False)
- def test_duplicated_drop_duplicates(self):
+ def test_duplicated_drop_duplicates_index(self):
# GH 4060
for original in self.objs:
-
if isinstance(original, Index):
# special case
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index d74fe68617ea2..cff5bbe14f1eb 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -34,10 +34,12 @@ def test_getitem(self):
self.assertEqual(self.factor[-1], 'c')
subf = self.factor[[0, 1, 2]]
- tm.assert_almost_equal(subf._codes, [0, 1, 1])
+ tm.assert_numpy_array_equal(subf._codes,
+ np.array([0, 1, 1], dtype=np.int8))
subf = self.factor[np.asarray(self.factor) == 'c']
- tm.assert_almost_equal(subf._codes, [2, 2, 2])
+ tm.assert_numpy_array_equal(subf._codes,
+ np.array([2, 2, 2], dtype=np.int8))
def test_getitem_listlike(self):
@@ -157,39 +159,39 @@ def f():
# Categorical as input
c1 = Categorical(["a", "b", "c", "a"])
c2 = Categorical(c1)
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
c2 = Categorical(c1)
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
c2 = Categorical(c1)
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
c2 = Categorical(c1, categories=["a", "b", "c"])
self.assert_numpy_array_equal(c1.__array__(), c2.__array__())
- self.assert_numpy_array_equal(c2.categories, np.array(["a", "b", "c"]))
+ self.assert_index_equal(c2.categories, Index(["a", "b", "c"]))
# Series of dtype category
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
c2 = Categorical(Series(c1))
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
c2 = Categorical(Series(c1))
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
# Series
c1 = Categorical(["a", "b", "c", "a"])
c2 = Categorical(Series(["a", "b", "c", "a"]))
- self.assertTrue(c1.equals(c2))
+ tm.assert_categorical_equal(c1, c2)
c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
- c2 = Categorical(
- Series(["a", "b", "c", "a"]), categories=["a", "b", "c", "d"])
- self.assertTrue(c1.equals(c2))
+ c2 = Categorical(Series(["a", "b", "c", "a"]),
+ categories=["a", "b", "c", "d"])
+ tm.assert_categorical_equal(c1, c2)
# This should result in integer categories, not float!
cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
@@ -281,11 +283,12 @@ def f():
def test_constructor_with_index(self):
ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
- self.assertTrue(ci.values.equals(Categorical(ci)))
+ tm.assert_categorical_equal(ci.values, Categorical(ci))
ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
- self.assertTrue(ci.values.equals(Categorical(
- ci.astype(object), categories=ci.categories)))
+ tm.assert_categorical_equal(ci.values,
+ Categorical(ci.astype(object),
+ categories=ci.categories))
def test_constructor_with_generator(self):
# This was raising an Error in isnull(single_val).any() because isnull
@@ -294,9 +297,9 @@ def test_constructor_with_generator(self):
exp = Categorical([0, 1, 2])
cat = Categorical((x for x in [0, 1, 2]))
- self.assertTrue(cat.equals(exp))
+ tm.assert_categorical_equal(cat, exp)
cat = Categorical(xrange(3))
- self.assertTrue(cat.equals(exp))
+ tm.assert_categorical_equal(cat, exp)
# This uses xrange internally
from pandas.core.index import MultiIndex
@@ -304,9 +307,9 @@ def test_constructor_with_generator(self):
# check that categories accept generators and sequences
cat = pd.Categorical([0, 1, 2], categories=(x for x in [0, 1, 2]))
- self.assertTrue(cat.equals(exp))
+ tm.assert_categorical_equal(cat, exp)
cat = pd.Categorical([0, 1, 2], categories=xrange(3))
- self.assertTrue(cat.equals(exp))
+ tm.assert_categorical_equal(cat, exp)
def test_constructor_with_datetimelike(self):
@@ -393,7 +396,7 @@ def f():
exp = Categorical(["a", "b", "c"], ordered=False)
res = Categorical.from_codes([0, 1, 2], ["a", "b", "c"])
- self.assertTrue(exp.equals(res))
+ tm.assert_categorical_equal(exp, res)
# Not available in earlier numpy versions
if hasattr(np.random, "choice"):
@@ -404,27 +407,27 @@ def test_comparisons(self):
result = self.factor[self.factor == 'a']
expected = self.factor[np.asarray(self.factor) == 'a']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
result = self.factor[self.factor != 'a']
expected = self.factor[np.asarray(self.factor) != 'a']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
result = self.factor[self.factor < 'c']
expected = self.factor[np.asarray(self.factor) < 'c']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
result = self.factor[self.factor > 'a']
expected = self.factor[np.asarray(self.factor) > 'a']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
result = self.factor[self.factor >= 'b']
expected = self.factor[np.asarray(self.factor) >= 'b']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
result = self.factor[self.factor <= 'b']
expected = self.factor[np.asarray(self.factor) <= 'b']
- self.assertTrue(result.equals(expected))
+ tm.assert_categorical_equal(result, expected)
n = len(self.factor)
@@ -551,7 +554,7 @@ def test_na_flags_int_categories(self):
def test_categories_none(self):
factor = Categorical(['a', 'b', 'b', 'a',
'a', 'c', 'c', 'c'], ordered=True)
- self.assertTrue(factor.equals(self.factor))
+ tm.assert_categorical_equal(factor, self.factor)
def test_describe(self):
# string type
@@ -710,7 +713,7 @@ def test_periodindex(self):
exp_arr = np.array([0, 0, 1, 1, 2, 2], dtype=np.int8)
exp_idx = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
self.assert_numpy_array_equal(cat1._codes, exp_arr)
- self.assertTrue(cat1.categories.equals(exp_idx))
+ self.assert_index_equal(cat1.categories, exp_idx)
idx2 = PeriodIndex(['2014-03', '2014-03', '2014-02', '2014-01',
'2014-03', '2014-01'], freq='M')
@@ -719,7 +722,7 @@ def test_periodindex(self):
exp_arr = np.array([2, 2, 1, 0, 2, 0], dtype=np.int8)
exp_idx2 = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
self.assert_numpy_array_equal(cat2._codes, exp_arr)
- self.assertTrue(cat2.categories.equals(exp_idx2))
+ self.assert_index_equal(cat2.categories, exp_idx2)
idx3 = PeriodIndex(['2013-12', '2013-11', '2013-10', '2013-09',
'2013-08', '2013-07', '2013-05'], freq='M')
@@ -728,15 +731,14 @@ def test_periodindex(self):
exp_idx = PeriodIndex(['2013-05', '2013-07', '2013-08', '2013-09',
'2013-10', '2013-11', '2013-12'], freq='M')
self.assert_numpy_array_equal(cat3._codes, exp_arr)
- self.assertTrue(cat3.categories.equals(exp_idx))
+ self.assert_index_equal(cat3.categories, exp_idx)
def test_categories_assigments(self):
s = pd.Categorical(["a", "b", "c", "a"])
exp = np.array([1, 2, 3, 1], dtype=np.int64)
s.categories = [1, 2, 3]
self.assert_numpy_array_equal(s.__array__(), exp)
- self.assert_numpy_array_equal(s.categories,
- np.array([1, 2, 3], dtype=np.int64))
+ self.assert_index_equal(s.categories, Index([1, 2, 3]))
# lengthen
def f():
@@ -762,21 +764,21 @@ def test_construction_with_ordered(self):
def test_ordered_api(self):
# GH 9347
cat1 = pd.Categorical(["a", "c", "b"], ordered=False)
- self.assertTrue(cat1.categories.equals(Index(['a', 'b', 'c'])))
+ self.assert_index_equal(cat1.categories, Index(['a', 'b', 'c']))
self.assertFalse(cat1.ordered)
cat2 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
ordered=False)
- self.assertTrue(cat2.categories.equals(Index(['b', 'c', 'a'])))
+ self.assert_index_equal(cat2.categories, Index(['b', 'c', 'a']))
self.assertFalse(cat2.ordered)
cat3 = pd.Categorical(["a", "c", "b"], ordered=True)
- self.assertTrue(cat3.categories.equals(Index(['a', 'b', 'c'])))
+ self.assert_index_equal(cat3.categories, Index(['a', 'b', 'c']))
self.assertTrue(cat3.ordered)
cat4 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
ordered=True)
- self.assertTrue(cat4.categories.equals(Index(['b', 'c', 'a'])))
+ self.assert_index_equal(cat4.categories, Index(['b', 'c', 'a']))
self.assertTrue(cat4.ordered)
def test_set_ordered(self):
@@ -808,21 +810,21 @@ def test_set_ordered(self):
def test_set_categories(self):
cat = Categorical(["a", "b", "c", "a"], ordered=True)
- exp_categories = np.array(["c", "b", "a"], dtype=np.object_)
+ exp_categories = Index(["c", "b", "a"])
exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_)
res = cat.set_categories(["c", "b", "a"], inplace=True)
- self.assert_numpy_array_equal(cat.categories, exp_categories)
+ self.assert_index_equal(cat.categories, exp_categories)
self.assert_numpy_array_equal(cat.__array__(), exp_values)
self.assertIsNone(res)
res = cat.set_categories(["a", "b", "c"])
# cat must be the same as before
- self.assert_numpy_array_equal(cat.categories, exp_categories)
+ self.assert_index_equal(cat.categories, exp_categories)
self.assert_numpy_array_equal(cat.__array__(), exp_values)
# only res is changed
- exp_categories_back = np.array(["a", "b", "c"])
- self.assert_numpy_array_equal(res.categories, exp_categories_back)
+ exp_categories_back = Index(["a", "b", "c"])
+ self.assert_index_equal(res.categories, exp_categories_back)
self.assert_numpy_array_equal(res.__array__(), exp_values)
# not all "old" included in "new" -> all not included ones are now
@@ -836,19 +838,18 @@ def test_set_categories(self):
res = cat.set_categories(["a", "b", "d"])
self.assert_numpy_array_equal(res.codes,
np.array([0, 1, -1, 0], dtype=np.int8))
- self.assert_numpy_array_equal(res.categories,
- np.array(["a", "b", "d"]))
+ self.assert_index_equal(res.categories, Index(["a", "b", "d"]))
# all "old" included in "new"
cat = cat.set_categories(["a", "b", "c", "d"])
- exp_categories = np.array(["a", "b", "c", "d"], dtype=np.object_)
- self.assert_numpy_array_equal(cat.categories, exp_categories)
+ exp_categories = Index(["a", "b", "c", "d"])
+ self.assert_index_equal(cat.categories, exp_categories)
# internals...
c = Categorical([1, 2, 3, 4, 1], categories=[1, 2, 3, 4], ordered=True)
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, 2, 3, 0], dtype=np.int8))
- self.assert_numpy_array_equal(c.categories, np.array([1, 2, 3, 4]))
+ self.assert_index_equal(c.categories, Index([1, 2, 3, 4]))
exp = np.array([1, 2, 3, 4, 1], dtype=np.int64)
self.assert_numpy_array_equal(c.get_values(), exp)
@@ -861,7 +862,7 @@ def test_set_categories(self):
np.array([3, 2, 1, 0, 3], dtype=np.int8))
# categories are now in new order
- self.assert_numpy_array_equal(c.categories, np.array([4, 3, 2, 1]))
+ self.assert_index_equal(c.categories, Index([4, 3, 2, 1]))
# output is the same
exp = np.array([1, 2, 3, 4, 1], dtype=np.int64)
@@ -886,22 +887,20 @@ def test_rename_categories(self):
res = cat.rename_categories([1, 2, 3])
self.assert_numpy_array_equal(res.__array__(),
np.array([1, 2, 3, 1], dtype=np.int64))
- self.assert_numpy_array_equal(res.categories,
- np.array([1, 2, 3], dtype=np.int64))
+ self.assert_index_equal(res.categories, Index([1, 2, 3]))
exp_cat = np.array(["a", "b", "c", "a"], dtype=np.object_)
self.assert_numpy_array_equal(cat.__array__(), exp_cat)
- exp_cat = np.array(["a", "b", "c"], dtype=np.object_)
- self.assert_numpy_array_equal(cat.categories, exp_cat)
+ exp_cat = Index(["a", "b", "c"])
+ self.assert_index_equal(cat.categories, exp_cat)
res = cat.rename_categories([1, 2, 3], inplace=True)
# and now inplace
self.assertIsNone(res)
self.assert_numpy_array_equal(cat.__array__(),
np.array([1, 2, 3, 1], dtype=np.int64))
- self.assert_numpy_array_equal(cat.categories,
- np.array([1, 2, 3], dtype=np.int64))
+ self.assert_index_equal(cat.categories, Index([1, 2, 3]))
# lengthen
def f():
@@ -1025,14 +1024,14 @@ def test_remove_unused_categories(self):
exp_categories_all = Index(["a", "b", "c", "d", "e"])
exp_categories_dropped = Index(["a", "b", "c", "d"])
- self.assert_numpy_array_equal(c.categories, exp_categories_all)
+ self.assert_index_equal(c.categories, exp_categories_all)
res = c.remove_unused_categories()
self.assert_index_equal(res.categories, exp_categories_dropped)
self.assert_index_equal(c.categories, exp_categories_all)
res = c.remove_unused_categories(inplace=True)
- self.assert_numpy_array_equal(c.categories, exp_categories_dropped)
+ self.assert_index_equal(c.categories, exp_categories_dropped)
self.assertIsNone(res)
# with NaN values (GH11599)
@@ -1065,11 +1064,11 @@ def test_nan_handling(self):
# Nans are represented as -1 in codes
c = Categorical(["a", "b", np.nan, "a"])
- self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+ self.assert_index_equal(c.categories, Index(["a", "b"]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, -1, 0], dtype=np.int8))
c[1] = np.nan
- self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+ self.assert_index_equal(c.categories, Index(["a", "b"]))
self.assert_numpy_array_equal(c._codes,
np.array([0, -1, -1, 0], dtype=np.int8))
@@ -1078,15 +1077,11 @@ def test_nan_handling(self):
with tm.assert_produces_warning(FutureWarning):
c = Categorical(["a", "b", np.nan, "a"],
categories=["a", "b", np.nan])
- self.assert_numpy_array_equal(c.categories,
- np.array(["a", "b", np.nan],
- dtype=np.object_))
+ self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, 2, 0], dtype=np.int8))
c[1] = np.nan
- self.assert_numpy_array_equal(c.categories,
- np.array(["a", "b", np.nan],
- dtype=np.object_))
+ self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 2, 2, 0], dtype=np.int8))
@@ -1095,30 +1090,24 @@ def test_nan_handling(self):
with tm.assert_produces_warning(FutureWarning):
c.categories = ["a", "b", np.nan] # noqa
- self.assert_numpy_array_equal(c.categories,
- np.array(["a", "b", np.nan],
- dtype=np.object_))
+ self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, 2, 0], dtype=np.int8))
# Adding nan to categories should make assigned nan point to the
# category!
c = Categorical(["a", "b", np.nan, "a"])
- self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+ self.assert_index_equal(c.categories, Index(["a", "b"]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, -1, 0], dtype=np.int8))
with tm.assert_produces_warning(FutureWarning):
c.set_categories(["a", "b", np.nan], rename=True, inplace=True)
- self.assert_numpy_array_equal(c.categories,
- np.array(["a", "b", np.nan],
- dtype=np.object_))
+ self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 1, -1, 0], dtype=np.int8))
c[1] = np.nan
- self.assert_numpy_array_equal(c.categories,
- np.array(["a", "b", np.nan],
- dtype=np.object_))
+ self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
self.assert_numpy_array_equal(c._codes,
np.array([0, 2, -1, 0], dtype=np.int8))
@@ -1244,63 +1233,58 @@ def test_min_max(self):
def test_unique(self):
# categories are reordered based on value when ordered=False
cat = Categorical(["a", "b"])
- exp = np.asarray(["a", "b"])
+ exp = Index(["a", "b"])
res = cat.unique()
- self.assert_numpy_array_equal(res, exp)
+ self.assert_index_equal(res.categories, exp)
+ self.assert_categorical_equal(res, cat)
cat = Categorical(["a", "b", "a", "a"], categories=["a", "b", "c"])
res = cat.unique()
- self.assert_numpy_array_equal(res, exp)
+ self.assert_index_equal(res.categories, exp)
tm.assert_categorical_equal(res, Categorical(exp))
cat = Categorical(["c", "a", "b", "a", "a"],
categories=["a", "b", "c"])
- exp = np.asarray(["c", "a", "b"])
+ exp = Index(["c", "a", "b"])
res = cat.unique()
- self.assert_numpy_array_equal(res, exp)
- tm.assert_categorical_equal(res, Categorical(
- exp, categories=['c', 'a', 'b']))
+ self.assert_index_equal(res.categories, exp)
+ exp_cat = Categorical(exp, categories=['c', 'a', 'b'])
+ tm.assert_categorical_equal(res, exp_cat)
# nan must be removed
cat = Categorical(["b", np.nan, "b", np.nan, "a"],
categories=["a", "b", "c"])
res = cat.unique()
- exp = np.asarray(["b", np.nan, "a"], dtype=object)
- self.assert_numpy_array_equal(res, exp)
- tm.assert_categorical_equal(res, Categorical(
- ["b", np.nan, "a"], categories=["b", "a"]))
+ exp = Index(["b", "a"])
+ self.assert_index_equal(res.categories, exp)
+ exp_cat = Categorical(["b", np.nan, "a"], categories=["b", "a"])
+ tm.assert_categorical_equal(res, exp_cat)
def test_unique_ordered(self):
# keep categories order when ordered=True
cat = Categorical(['b', 'a', 'b'], categories=['a', 'b'], ordered=True)
res = cat.unique()
- exp = np.asarray(['b', 'a'])
- exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
- self.assert_numpy_array_equal(res, exp)
+ exp_cat = Categorical(['b', 'a'], categories=['a', 'b'], ordered=True)
tm.assert_categorical_equal(res, exp_cat)
cat = Categorical(['c', 'b', 'a', 'a'], categories=['a', 'b', 'c'],
ordered=True)
res = cat.unique()
- exp = np.asarray(['c', 'b', 'a'])
- exp_cat = Categorical(exp, categories=['a', 'b', 'c'], ordered=True)
- self.assert_numpy_array_equal(res, exp)
+ exp_cat = Categorical(['c', 'b', 'a'], categories=['a', 'b', 'c'],
+ ordered=True)
tm.assert_categorical_equal(res, exp_cat)
cat = Categorical(['b', 'a', 'a'], categories=['a', 'b', 'c'],
ordered=True)
res = cat.unique()
- exp = np.asarray(['b', 'a'])
- exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
- self.assert_numpy_array_equal(res, exp)
+ exp_cat = Categorical(['b', 'a'], categories=['a', 'b'], ordered=True)
tm.assert_categorical_equal(res, exp_cat)
cat = Categorical(['b', 'b', np.nan, 'a'], categories=['a', 'b', 'c'],
ordered=True)
res = cat.unique()
- exp = np.asarray(['b', np.nan, 'a'], dtype=object)
- exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
- self.assert_numpy_array_equal(res, exp)
+ exp_cat = Categorical(['b', np.nan, 'a'], categories=['a', 'b'],
+ ordered=True)
tm.assert_categorical_equal(res, exp_cat)
def test_mode(self):
@@ -1308,33 +1292,33 @@ def test_mode(self):
ordered=True)
res = s.mode()
exp = Categorical([5], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
s = Categorical([1, 1, 1, 4, 5, 5, 5], categories=[5, 4, 3, 2, 1],
ordered=True)
res = s.mode()
exp = Categorical([5, 1], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
s = Categorical([1, 2, 3, 4, 5], categories=[5, 4, 3, 2, 1],
ordered=True)
res = s.mode()
exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
# NaN should not become the mode!
s = Categorical([np.nan, np.nan, np.nan, 4, 5],
categories=[5, 4, 3, 2, 1], ordered=True)
res = s.mode()
exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
s = Categorical([np.nan, np.nan, np.nan, 4, 5, 4],
categories=[5, 4, 3, 2, 1], ordered=True)
res = s.mode()
exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
s = Categorical([np.nan, np.nan, 4, 5, 4], categories=[5, 4, 3, 2, 1],
ordered=True)
res = s.mode()
exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
- self.assertTrue(res.equals(exp))
+ tm.assert_categorical_equal(res, exp)
def test_sort_values(self):
@@ -1348,74 +1332,78 @@ def test_sort_values(self):
res = cat.sort_values()
exp = np.array(["a", "b", "c", "d"], dtype=object)
self.assert_numpy_array_equal(res.__array__(), exp)
+ self.assert_index_equal(res.categories, cat.categories)
cat = Categorical(["a", "c", "b", "d"],
categories=["a", "b", "c", "d"], ordered=True)
res = cat.sort_values()
exp = np.array(["a", "b", "c", "d"], dtype=object)
self.assert_numpy_array_equal(res.__array__(), exp)
+ self.assert_index_equal(res.categories, cat.categories)
res = cat.sort_values(ascending=False)
exp = np.array(["d", "c", "b", "a"], dtype=object)
self.assert_numpy_array_equal(res.__array__(), exp)
+ self.assert_index_equal(res.categories, cat.categories)
# sort (inplace order)
cat1 = cat.copy()
cat1.sort_values(inplace=True)
exp = np.array(["a", "b", "c", "d"], dtype=object)
self.assert_numpy_array_equal(cat1.__array__(), exp)
+ self.assert_index_equal(res.categories, cat.categories)
# reverse
cat = Categorical(["a", "c", "c", "b", "d"], ordered=True)
res = cat.sort_values(ascending=False)
exp_val = np.array(["d", "c", "c", "b", "a"], dtype=object)
- exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+ exp_categories = Index(["a", "b", "c", "d"])
self.assert_numpy_array_equal(res.__array__(), exp_val)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
def test_sort_values_na_position(self):
# see gh-12882
cat = Categorical([5, 2, np.nan, 2, np.nan], ordered=True)
- exp_categories = np.array([2, 5])
+ exp_categories = Index([2, 5])
exp = np.array([2.0, 2.0, 5.0, np.nan, np.nan])
res = cat.sort_values() # default arguments
self.assert_numpy_array_equal(res.__array__(), exp)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
exp = np.array([np.nan, np.nan, 2.0, 2.0, 5.0])
res = cat.sort_values(ascending=True, na_position='first')
self.assert_numpy_array_equal(res.__array__(), exp)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
exp = np.array([np.nan, np.nan, 5.0, 2.0, 2.0])
res = cat.sort_values(ascending=False, na_position='first')
self.assert_numpy_array_equal(res.__array__(), exp)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
exp = np.array([2.0, 2.0, 5.0, np.nan, np.nan])
res = cat.sort_values(ascending=True, na_position='last')
self.assert_numpy_array_equal(res.__array__(), exp)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
exp = np.array([5.0, 2.0, 2.0, np.nan, np.nan])
res = cat.sort_values(ascending=False, na_position='last')
self.assert_numpy_array_equal(res.__array__(), exp)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
res = cat.sort_values(ascending=False, na_position='last')
exp_val = np.array(["d", "c", "b", "a", np.nan], dtype=object)
- exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+ exp_categories = Index(["a", "b", "c", "d"])
self.assert_numpy_array_equal(res.__array__(), exp_val)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
res = cat.sort_values(ascending=False, na_position='first')
exp_val = np.array([np.nan, "d", "c", "b", "a"], dtype=object)
- exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+ exp_categories = Index(["a", "b", "c", "d"])
self.assert_numpy_array_equal(res.__array__(), exp_val)
- self.assert_numpy_array_equal(res.categories, exp_categories)
+ self.assert_index_equal(res.categories, exp_categories)
def test_slicing_directly(self):
cat = Categorical(["a", "b", "c", "d", "a", "b", "c"])
@@ -1430,7 +1418,7 @@ def test_set_item_nan(self):
cat = pd.Categorical([1, 2, 3])
exp = pd.Categorical([1, np.nan, 3], categories=[1, 2, 3])
cat[1] = np.nan
- self.assertTrue(cat.equals(exp))
+ tm.assert_categorical_equal(cat, exp)
# if nan in categories, the proper code should be set!
cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
@@ -1570,10 +1558,10 @@ def test_deprecated_levels(self):
exp = cat.categories
with tm.assert_produces_warning(FutureWarning):
res = cat.levels
- self.assert_numpy_array_equal(res, exp)
+ self.assert_index_equal(res, exp)
with tm.assert_produces_warning(FutureWarning):
res = pd.Categorical([1, 2, 3, np.nan], levels=[1, 2, 3])
- self.assert_numpy_array_equal(res.categories, exp)
+ self.assert_index_equal(res.categories, exp)
def test_removed_names_produces_warning(self):
@@ -1587,14 +1575,18 @@ def test_removed_names_produces_warning(self):
def test_datetime_categorical_comparison(self):
dt_cat = pd.Categorical(
pd.date_range('2014-01-01', periods=3), ordered=True)
- self.assert_numpy_array_equal(dt_cat > dt_cat[0], [False, True, True])
- self.assert_numpy_array_equal(dt_cat[0] < dt_cat, [False, True, True])
+ self.assert_numpy_array_equal(dt_cat > dt_cat[0],
+ np.array([False, True, True]))
+ self.assert_numpy_array_equal(dt_cat[0] < dt_cat,
+ np.array([False, True, True]))
def test_reflected_comparison_with_scalars(self):
# GH8658
cat = pd.Categorical([1, 2, 3], ordered=True)
- self.assert_numpy_array_equal(cat > cat[0], [False, True, True])
- self.assert_numpy_array_equal(cat[0] < cat, [False, True, True])
+ self.assert_numpy_array_equal(cat > cat[0],
+ np.array([False, True, True]))
+ self.assert_numpy_array_equal(cat[0] < cat,
+ np.array([False, True, True]))
def test_comparison_with_unknown_scalars(self):
# https://github.com/pydata/pandas/issues/9836#issuecomment-92123057
@@ -1607,8 +1599,10 @@ def test_comparison_with_unknown_scalars(self):
self.assertRaises(TypeError, lambda: 4 < cat)
self.assertRaises(TypeError, lambda: 4 > cat)
- self.assert_numpy_array_equal(cat == 4, [False, False, False])
- self.assert_numpy_array_equal(cat != 4, [True, True, True])
+ self.assert_numpy_array_equal(cat == 4,
+ np.array([False, False, False]))
+ self.assert_numpy_array_equal(cat != 4,
+ np.array([True, True, True]))
def test_map(self):
c = pd.Categorical(list('ABABC'), categories=list('CBA'),
@@ -1935,8 +1929,7 @@ def test_nan_handling(self):
# Nans are represented as -1 in labels
s = Series(Categorical(["a", "b", np.nan, "a"]))
- self.assert_numpy_array_equal(s.cat.categories,
- np.array(["a", "b"], dtype=np.object_))
+ self.assert_index_equal(s.cat.categories, Index(["a", "b"]))
self.assert_numpy_array_equal(s.values.codes,
np.array([0, 1, -1, 0], dtype=np.int8))
@@ -1946,8 +1939,8 @@ def test_nan_handling(self):
s2 = Series(Categorical(["a", "b", np.nan, "a"],
categories=["a", "b", np.nan]))
- exp_cat = np.array(["a", "b", np.nan], dtype=np.object_)
- self.assert_numpy_array_equal(s2.cat.categories, exp_cat)
+ exp_cat = Index(["a", "b", np.nan])
+ self.assert_index_equal(s2.cat.categories, exp_cat)
self.assert_numpy_array_equal(s2.values.codes,
np.array([0, 1, 2, 0], dtype=np.int8))
@@ -1956,24 +1949,26 @@ def test_nan_handling(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
s3.cat.categories = ["a", "b", np.nan]
- exp_cat = np.array(["a", "b", np.nan], dtype=np.object_)
- self.assert_numpy_array_equal(s3.cat.categories, exp_cat)
+ exp_cat = Index(["a", "b", np.nan])
+ self.assert_index_equal(s3.cat.categories, exp_cat)
self.assert_numpy_array_equal(s3.values.codes,
np.array([0, 1, 2, 0], dtype=np.int8))
def test_cat_accessor(self):
s = Series(Categorical(["a", "b", np.nan, "a"]))
- self.assert_numpy_array_equal(s.cat.categories, np.array(["a", "b"]))
+ self.assert_index_equal(s.cat.categories, Index(["a", "b"]))
self.assertEqual(s.cat.ordered, False)
exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
s.cat.set_categories(["b", "a"], inplace=True)
- self.assertTrue(s.values.equals(exp))
+ tm.assert_categorical_equal(s.values, exp)
+
res = s.cat.set_categories(["b", "a"])
- self.assertTrue(res.values.equals(exp))
+ tm.assert_categorical_equal(res.values, exp)
+
exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
s[:] = "a"
s = s.cat.remove_unused_categories()
- self.assert_numpy_array_equal(s.cat.categories, np.array(["a"]))
+ self.assert_index_equal(s.cat.categories, Index(["a"]))
def test_sequence_like(self):
@@ -2015,11 +2010,11 @@ def test_series_delegations(self):
# and the methods '.set_categories()' 'drop_unused_categories()' to the
# categorical
s = Series(Categorical(["a", "b", "c", "a"], ordered=True))
- exp_categories = np.array(["a", "b", "c"])
- self.assert_numpy_array_equal(s.cat.categories, exp_categories)
+ exp_categories = Index(["a", "b", "c"])
+ tm.assert_index_equal(s.cat.categories, exp_categories)
s.cat.categories = [1, 2, 3]
- exp_categories = np.array([1, 2, 3])
- self.assert_numpy_array_equal(s.cat.categories, exp_categories)
+ exp_categories = Index([1, 2, 3])
+ self.assert_index_equal(s.cat.categories, exp_categories)
exp_codes = Series([0, 1, 2, 0], dtype='int8')
tm.assert_series_equal(s.cat.codes, exp_codes)
@@ -2032,20 +2027,20 @@ def test_series_delegations(self):
# reorder
s = Series(Categorical(["a", "b", "c", "a"], ordered=True))
- exp_categories = np.array(["c", "b", "a"])
+ exp_categories = Index(["c", "b", "a"])
exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_)
s = s.cat.set_categories(["c", "b", "a"])
- self.assert_numpy_array_equal(s.cat.categories, exp_categories)
+ tm.assert_index_equal(s.cat.categories, exp_categories)
self.assert_numpy_array_equal(s.values.__array__(), exp_values)
self.assert_numpy_array_equal(s.__array__(), exp_values)
# remove unused categories
s = Series(Categorical(["a", "b", "b", "a"], categories=["a", "b", "c"
]))
- exp_categories = np.array(["a", "b"], dtype=object)
+ exp_categories = Index(["a", "b"])
exp_values = np.array(["a", "b", "b", "a"], dtype=np.object_)
s = s.cat.remove_unused_categories()
- self.assert_numpy_array_equal(s.cat.categories, exp_categories)
+ self.assert_index_equal(s.cat.categories, exp_categories)
self.assert_numpy_array_equal(s.values.__array__(), exp_values)
self.assert_numpy_array_equal(s.__array__(), exp_values)
@@ -2092,11 +2087,11 @@ def test_assignment_to_dataframe(self):
result1 = df['D']
result2 = df['E']
- self.assertTrue(result1._data._block.values.equals(d))
+ self.assert_categorical_equal(result1._data._block.values, d)
# sorting
s.name = 'E'
- self.assertTrue(result2.sort_index().equals(s.sort_index()))
+ self.assert_series_equal(result2.sort_index(), s.sort_index())
cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
df = pd.DataFrame(pd.Series(cat))
@@ -3152,7 +3147,7 @@ def test_sort_values(self):
res = df.sort_values(by=["sort"], ascending=False)
exp = df.sort_values(by=["string"], ascending=True)
- self.assert_numpy_array_equal(res["values"], exp["values"])
+ self.assert_series_equal(res["values"], exp["values"])
self.assertEqual(res["sort"].dtype, "category")
self.assertEqual(res["unsort"].dtype, "category")
@@ -3906,15 +3901,15 @@ def f():
df1 = df[0:3]
df2 = df[3:]
- self.assert_numpy_array_equal(df['grade'].cat.categories,
- df1['grade'].cat.categories)
- self.assert_numpy_array_equal(df['grade'].cat.categories,
- df2['grade'].cat.categories)
+ self.assert_index_equal(df['grade'].cat.categories,
+ df1['grade'].cat.categories)
+ self.assert_index_equal(df['grade'].cat.categories,
+ df2['grade'].cat.categories)
dfx = pd.concat([df1, df2])
dfx['grade'].cat.categories
- self.assert_numpy_array_equal(df['grade'].cat.categories,
- dfx['grade'].cat.categories)
+ self.assert_index_equal(df['grade'].cat.categories,
+ dfx['grade'].cat.categories)
def test_concat_preserve(self):
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index b6ed5dc68f905..cc0972937b8a2 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -287,7 +287,12 @@ def testit():
use_numexpr=True)
expected = expr.evaluate(op, op_str, f, f,
use_numexpr=False)
- tm.assert_numpy_array_equal(result, expected.values)
+
+ if isinstance(result, DataFrame):
+ tm.assert_frame_equal(result, expected)
+ else:
+ tm.assert_numpy_array_equal(result,
+ expected.values)
result = expr._can_use_numexpr(op, op_str, f2, f2,
'evaluate')
@@ -325,7 +330,10 @@ def testit():
use_numexpr=True)
expected = expr.evaluate(op, op_str, f11, f12,
use_numexpr=False)
- tm.assert_numpy_array_equal(result, expected.values)
+ if isinstance(result, DataFrame):
+ tm.assert_frame_equal(result, expected)
+ else:
+ tm.assert_numpy_array_equal(result, expected.values)
result = expr._can_use_numexpr(op, op_str, f21, f22,
'evaluate')
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 36962a37ec898..83e1a17fc8b0c 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -1289,7 +1289,7 @@ def test_tz_convert_and_localize(self):
df1 = DataFrame(np.ones(5), index=l0)
df1 = getattr(df1, fn)('US/Pacific')
- self.assertTrue(df1.index.equals(l0_expected))
+ self.assert_index_equal(df1.index, l0_expected)
# MultiIndex
# GH7846
@@ -1297,14 +1297,14 @@ def test_tz_convert_and_localize(self):
df3 = getattr(df2, fn)('US/Pacific', level=0)
self.assertFalse(df3.index.levels[0].equals(l0))
- self.assertTrue(df3.index.levels[0].equals(l0_expected))
- self.assertTrue(df3.index.levels[1].equals(l1))
+ self.assert_index_equal(df3.index.levels[0], l0_expected)
+ self.assert_index_equal(df3.index.levels[1], l1)
self.assertFalse(df3.index.levels[1].equals(l1_expected))
df3 = getattr(df2, fn)('US/Pacific', level=1)
- self.assertTrue(df3.index.levels[0].equals(l0))
+ self.assert_index_equal(df3.index.levels[0], l0)
self.assertFalse(df3.index.levels[0].equals(l0_expected))
- self.assertTrue(df3.index.levels[1].equals(l1_expected))
+ self.assert_index_equal(df3.index.levels[1], l1_expected)
self.assertFalse(df3.index.levels[1].equals(l1))
df4 = DataFrame(np.ones(5),
@@ -1313,9 +1313,9 @@ def test_tz_convert_and_localize(self):
# TODO: untested
df5 = getattr(df4, fn)('US/Pacific', level=1) # noqa
- self.assertTrue(df3.index.levels[0].equals(l0))
+ self.assert_index_equal(df3.index.levels[0], l0)
self.assertFalse(df3.index.levels[0].equals(l0_expected))
- self.assertTrue(df3.index.levels[1].equals(l1_expected))
+ self.assert_index_equal(df3.index.levels[1], l1_expected)
self.assertFalse(df3.index.levels[1].equals(l1))
# Bad Inputs
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index b59d6ac0027dd..b09185c19bffb 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -706,14 +706,12 @@ def test_bar_log(self):
expected = np.hstack((1.0e-04, expected, 1.0e+01))
ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='bar')
- tm.assert_numpy_array_equal(ax.get_ylim(),
- (0.001, 0.10000000000000001))
+ self.assertEqual(ax.get_ylim(), (0.001, 0.10000000000000001))
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected)
tm.close()
ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='barh')
- tm.assert_numpy_array_equal(ax.get_xlim(),
- (0.001, 0.10000000000000001))
+ self.assertEqual(ax.get_xlim(), (0.001, 0.10000000000000001))
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), expected)
@slow
@@ -2205,11 +2203,11 @@ def test_scatter_colors(self):
ax = df.plot.scatter(x='a', y='b', c='c')
tm.assert_numpy_array_equal(ax.collections[0].get_facecolor()[0],
- (0, 0, 1, 1))
+ np.array([0, 0, 1, 1], dtype=np.float64))
ax = df.plot.scatter(x='a', y='b', color='white')
tm.assert_numpy_array_equal(ax.collections[0].get_facecolor()[0],
- (1, 1, 1, 1))
+ np.array([1, 1, 1, 1], dtype=np.float64))
@slow
def test_plot_bar(self):
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 1996d132e01ba..6659e6b106a67 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -1088,13 +1088,13 @@ def test_transform_broadcast(self):
grouped = self.ts.groupby(lambda x: x.month)
result = grouped.transform(np.mean)
- self.assertTrue(result.index.equals(self.ts.index))
+ self.assert_index_equal(result.index, self.ts.index)
for _, gp in grouped:
assert_fp_equal(result.reindex(gp.index), gp.mean())
grouped = self.tsframe.groupby(lambda x: x.month)
result = grouped.transform(np.mean)
- self.assertTrue(result.index.equals(self.tsframe.index))
+ self.assert_index_equal(result.index, self.tsframe.index)
for _, gp in grouped:
agged = gp.mean()
res = result.reindex(gp.index)
@@ -1105,8 +1105,8 @@ def test_transform_broadcast(self):
grouped = self.tsframe.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
axis=1)
result = grouped.transform(np.mean)
- self.assertTrue(result.index.equals(self.tsframe.index))
- self.assertTrue(result.columns.equals(self.tsframe.columns))
+ self.assert_index_equal(result.index, self.tsframe.index)
+ self.assert_index_equal(result.columns, self.tsframe.columns)
for _, gp in grouped:
agged = gp.mean(1)
res = result.reindex(columns=gp.columns)
@@ -2137,7 +2137,7 @@ def test_groupby_multiple_key(self):
lambda x: x.day], axis=1)
agged = grouped.agg(lambda x: x.sum())
- self.assertTrue(agged.index.equals(df.columns))
+ self.assert_index_equal(agged.index, df.columns)
assert_almost_equal(df.T.values, agged.values)
agged = grouped.agg(lambda x: x.sum())
@@ -2549,7 +2549,7 @@ def f(piece):
result = grouped.apply(f)
tm.assertIsInstance(result, DataFrame)
- self.assertTrue(result.index.equals(ts.index))
+ self.assert_index_equal(result.index, ts.index)
def test_apply_series_yield_constant(self):
result = self.df.groupby(['A', 'B'])['C'].apply(len)
@@ -2559,7 +2559,7 @@ def test_apply_frame_to_series(self):
grouped = self.df.groupby(['A', 'B'])
result = grouped.apply(len)
expected = grouped.count()['C']
- self.assertTrue(result.index.equals(expected.index))
+ self.assert_index_equal(result.index, expected.index)
self.assert_numpy_array_equal(result.values, expected.values)
def test_apply_frame_concat_series(self):
@@ -2673,26 +2673,26 @@ def test_groupby_with_hier_columns(self):
df = DataFrame(np.random.randn(8, 4), index=index, columns=columns)
result = df.groupby(level=0).mean()
- self.assertTrue(result.columns.equals(columns))
+ self.assert_index_equal(result.columns, columns)
result = df.groupby(level=0, axis=1).mean()
- self.assertTrue(result.index.equals(df.index))
+ self.assert_index_equal(result.index, df.index)
result = df.groupby(level=0).agg(np.mean)
- self.assertTrue(result.columns.equals(columns))
+ self.assert_index_equal(result.columns, columns)
result = df.groupby(level=0).apply(lambda x: x.mean())
- self.assertTrue(result.columns.equals(columns))
+ self.assert_index_equal(result.columns, columns)
result = df.groupby(level=0, axis=1).agg(lambda x: x.mean(1))
- self.assertTrue(result.columns.equals(Index(['A', 'B'])))
- self.assertTrue(result.index.equals(df.index))
+ self.assert_index_equal(result.columns, Index(['A', 'B']))
+ self.assert_index_equal(result.index, df.index)
# add a nuisance column
sorted_columns, _ = columns.sortlevel(0)
df['A', 'foo'] = 'bar'
result = df.groupby(level=0).mean()
- self.assertTrue(result.columns.equals(df.columns[:-1]))
+ self.assert_index_equal(result.columns, df.columns[:-1])
def test_pass_args_kwargs(self):
from numpy import percentile
@@ -3413,18 +3413,18 @@ def test_panel_groupby(self):
tm.assert_panel_equal(agged, agged2)
- self.assert_numpy_array_equal(agged.items, [0, 1])
+ self.assert_index_equal(agged.items, Index([0, 1]))
grouped = self.panel.groupby(lambda x: x.month, axis='major')
agged = grouped.mean()
- self.assert_numpy_array_equal(agged.major_axis, sorted(list(set(
- self.panel.major_axis.month))))
+ exp = Index(sorted(list(set(self.panel.major_axis.month))))
+ self.assert_index_equal(agged.major_axis, exp)
grouped = self.panel.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
axis='minor')
agged = grouped.mean()
- self.assert_numpy_array_equal(agged.minor_axis, [0, 1])
+ self.assert_index_equal(agged.minor_axis, Index([0, 1]))
def test_numpy_groupby(self):
from pandas.core.groupby import numpy_groupby
@@ -3450,7 +3450,7 @@ def test_groupby_2d_malformed(self):
d['label'] = ['l1', 'l2']
tmp = d.groupby(['group']).mean()
res_values = np.array([[0, 1], [0, 1]], dtype=np.int64)
- self.assert_numpy_array_equal(tmp.columns, ['zeros', 'ones'])
+ self.assert_index_equal(tmp.columns, Index(['zeros', 'ones']))
self.assert_numpy_array_equal(tmp.values, res_values)
def test_int32_overflow(self):
@@ -3489,10 +3489,10 @@ def test_int64_overflow(self):
right = rg.sum()['values']
exp_index, _ = left.index.sortlevel(0)
- self.assertTrue(left.index.equals(exp_index))
+ self.assert_index_equal(left.index, exp_index)
exp_index, _ = right.index.sortlevel(0)
- self.assertTrue(right.index.equals(exp_index))
+ self.assert_index_equal(right.index, exp_index)
tups = list(map(tuple, df[['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'
]].values))
@@ -3720,9 +3720,9 @@ def test_agg_multiple_functions_maintain_order(self):
# GH #610
funcs = [('mean', np.mean), ('max', np.max), ('min', np.min)]
result = self.df.groupby('A')['C'].agg(funcs)
- exp_cols = ['mean', 'max', 'min']
+ exp_cols = Index(['mean', 'max', 'min'])
- self.assert_numpy_array_equal(result.columns, exp_cols)
+ self.assert_index_equal(result.columns, exp_cols)
def test_multiple_functions_tuples_and_non_tuples(self):
# #1359
@@ -4275,10 +4275,10 @@ def test_multiindex_columns_empty_level(self):
df = DataFrame([[long(1), 'A']], columns=midx)
grouped = df.groupby('to filter').groups
- self.assert_numpy_array_equal(grouped['A'], [0])
+ self.assertEqual(grouped['A'], [0])
grouped = df.groupby([('to filter', '')]).groups
- self.assert_numpy_array_equal(grouped['A'], [0])
+ self.assertEqual(grouped['A'], [0])
df = DataFrame([[long(1), 'A'], [long(2), 'B']], columns=midx)
@@ -5853,25 +5853,23 @@ def test_lexsort_indexer(self):
keys = [[nan] * 5 + list(range(100)) + [nan] * 5]
# orders=True, na_position='last'
result = _lexsort_indexer(keys, orders=True, na_position='last')
- expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp))
# orders=True, na_position='first'
result = _lexsort_indexer(keys, orders=True, na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+ tm.assert_numpy_array_equal(result, np.array(exp))
# orders=False, na_position='last'
result = _lexsort_indexer(keys, orders=False, na_position='last')
- expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
- 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp))
# orders=False, na_position='first'
result = _lexsort_indexer(keys, orders=False, na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
- -1))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+ tm.assert_numpy_array_equal(result, np.array(exp))
def test_nargsort(self):
# np.argsort(items) places NaNs last
@@ -5897,54 +5895,50 @@ def test_nargsort(self):
# mergesort, ascending=True, na_position='last'
result = _nargsort(items, kind='mergesort', ascending=True,
na_position='last')
- expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=True, na_position='first'
result = _nargsort(items, kind='mergesort', ascending=True,
na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=False, na_position='last'
result = _nargsort(items, kind='mergesort', ascending=False,
na_position='last')
- expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
- 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=False, na_position='first'
result = _nargsort(items, kind='mergesort', ascending=False,
na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
- -1))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=True, na_position='last'
result = _nargsort(items2, kind='mergesort', ascending=True,
na_position='last')
- expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=True, na_position='first'
result = _nargsort(items2, kind='mergesort', ascending=True,
na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=False, na_position='last'
result = _nargsort(items2, kind='mergesort', ascending=False,
na_position='last')
- expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
- 110))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
# mergesort, ascending=False, na_position='first'
result = _nargsort(items2, kind='mergesort', ascending=False,
na_position='first')
- expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
- -1))
- tm.assert_numpy_array_equal(result, expected)
+ exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+ tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))
def test_datetime_count(self):
df = DataFrame({'a': [1, 2, 3] * 2,
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index bf9574f48913a..6a97f195abba7 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -17,15 +17,19 @@
import pandas.core.algorithms as algos
import pandas.util.testing as tm
import pandas as pd
+from pandas import lib
from pandas.util.testing import (assert_almost_equal, assert_frame_equal,
randn, assert_series_equal)
from pandas.compat import zip, u
def assert_block_equal(left, right):
- assert_almost_equal(left.values, right.values)
+ tm.assert_numpy_array_equal(left.values, right.values)
assert (left.dtype == right.dtype)
- assert_almost_equal(left.mgr_locs, right.mgr_locs)
+ tm.assertIsInstance(left.mgr_locs, lib.BlockPlacement)
+ tm.assertIsInstance(right.mgr_locs, lib.BlockPlacement)
+ tm.assert_numpy_array_equal(left.mgr_locs.as_array,
+ right.mgr_locs.as_array)
def get_numeric_mat(shape):
@@ -207,7 +211,9 @@ def _check(blk):
_check(self.bool_block)
def test_mgr_locs(self):
- assert_almost_equal(self.fblock.mgr_locs, [0, 2, 4])
+ tm.assertIsInstance(self.fblock.mgr_locs, lib.BlockPlacement)
+ tm.assert_numpy_array_equal(self.fblock.mgr_locs.as_array,
+ np.array([0, 2, 4], dtype=np.int64))
def test_attrs(self):
self.assertEqual(self.fblock.shape, self.fblock.values.shape)
@@ -223,9 +229,10 @@ def test_merge(self):
ablock = make_block(avals, ref_cols.get_indexer(['e', 'b']))
bblock = make_block(bvals, ref_cols.get_indexer(['a', 'd']))
merged = ablock.merge(bblock)
- assert_almost_equal(merged.mgr_locs, [0, 1, 2, 3])
- assert_almost_equal(merged.values[[0, 2]], avals)
- assert_almost_equal(merged.values[[1, 3]], bvals)
+ tm.assert_numpy_array_equal(merged.mgr_locs.as_array,
+ np.array([0, 1, 2, 3], dtype=np.int64))
+ tm.assert_numpy_array_equal(merged.values[[0, 2]], np.array(avals))
+ tm.assert_numpy_array_equal(merged.values[[1, 3]], np.array(bvals))
# TODO: merge with mixed type?
@@ -246,17 +253,22 @@ def test_insert(self):
def test_delete(self):
newb = self.fblock.copy()
newb.delete(0)
- assert_almost_equal(newb.mgr_locs, [2, 4])
+ tm.assertIsInstance(newb.mgr_locs, lib.BlockPlacement)
+ tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+ np.array([2, 4], dtype=np.int64))
self.assertTrue((newb.values[0] == 1).all())
newb = self.fblock.copy()
newb.delete(1)
- assert_almost_equal(newb.mgr_locs, [0, 4])
+ tm.assertIsInstance(newb.mgr_locs, lib.BlockPlacement)
+ tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+ np.array([0, 4], dtype=np.int64))
self.assertTrue((newb.values[1] == 2).all())
newb = self.fblock.copy()
newb.delete(2)
- assert_almost_equal(newb.mgr_locs, [0, 2])
+ tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+ np.array([0, 2], dtype=np.int64))
self.assertTrue((newb.values[1] == 1).all())
newb = self.fblock.copy()
@@ -399,9 +411,9 @@ def test_get_scalar(self):
for i, index in enumerate(self.mgr.axes[1]):
res = self.mgr.get_scalar((item, index))
exp = self.mgr.get(item, fastpath=False)[i]
- assert_almost_equal(res, exp)
+ self.assertEqual(res, exp)
exp = self.mgr.get(item).internal_values()[i]
- assert_almost_equal(res, exp)
+ self.assertEqual(res, exp)
def test_get(self):
cols = Index(list('abc'))
@@ -421,10 +433,14 @@ def test_set(self):
mgr.set('d', np.array(['foo'] * 3))
mgr.set('b', np.array(['bar'] * 3))
- assert_almost_equal(mgr.get('a').internal_values(), [0] * 3)
- assert_almost_equal(mgr.get('b').internal_values(), ['bar'] * 3)
- assert_almost_equal(mgr.get('c').internal_values(), [2] * 3)
- assert_almost_equal(mgr.get('d').internal_values(), ['foo'] * 3)
+ tm.assert_numpy_array_equal(mgr.get('a').internal_values(),
+ np.array([0] * 3))
+ tm.assert_numpy_array_equal(mgr.get('b').internal_values(),
+ np.array(['bar'] * 3, dtype=np.object_))
+ tm.assert_numpy_array_equal(mgr.get('c').internal_values(),
+ np.array([2] * 3))
+ tm.assert_numpy_array_equal(mgr.get('d').internal_values(),
+ np.array(['foo'] * 3, dtype=np.object_))
def test_insert(self):
self.mgr.insert(0, 'inserted', np.arange(N))
@@ -689,8 +705,9 @@ def test_consolidate_ordering_issues(self):
self.assertEqual(cons.nblocks, 4)
cons = self.mgr.consolidate().get_numeric_data()
self.assertEqual(cons.nblocks, 1)
- assert_almost_equal(cons.blocks[0].mgr_locs,
- np.arange(len(cons.items)))
+ tm.assertIsInstance(cons.blocks[0].mgr_locs, lib.BlockPlacement)
+ tm.assert_numpy_array_equal(cons.blocks[0].mgr_locs.as_array,
+ np.arange(len(cons.items), dtype=np.int64))
def test_reindex_index(self):
pass
@@ -786,18 +803,18 @@ def test_get_bool_data(self):
bools.get('bool').internal_values())
bools.set('bool', np.array([True, False, True]))
- assert_almost_equal(
- mgr.get('bool', fastpath=False), [True, False, True])
- assert_almost_equal(
- mgr.get('bool').internal_values(), [True, False, True])
+ tm.assert_numpy_array_equal(mgr.get('bool', fastpath=False),
+ np.array([True, False, True]))
+ tm.assert_numpy_array_equal(mgr.get('bool').internal_values(),
+ np.array([True, False, True]))
# Check sharing
bools2 = mgr.get_bool_data(copy=True)
bools2.set('bool', np.array([False, True, False]))
- assert_almost_equal(
- mgr.get('bool', fastpath=False), [True, False, True])
- assert_almost_equal(
- mgr.get('bool').internal_values(), [True, False, True])
+ tm.assert_numpy_array_equal(mgr.get('bool', fastpath=False),
+ np.array([True, False, True]))
+ tm.assert_numpy_array_equal(mgr.get('bool').internal_values(),
+ np.array([True, False, True]))
def test_unicode_repr_doesnt_raise(self):
repr(create_mgr(u('b,\u05d0: object')))
@@ -892,8 +909,7 @@ def assert_slice_ok(mgr, axis, slobj):
mat_slobj = (slice(None), ) * axis + (slobj, )
tm.assert_numpy_array_equal(mat[mat_slobj], sliced.as_matrix(),
check_dtype=False)
- tm.assert_numpy_array_equal(mgr.axes[axis][slobj],
- sliced.axes[axis])
+ tm.assert_index_equal(mgr.axes[axis][slobj], sliced.axes[axis])
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
@@ -931,8 +947,8 @@ def assert_take_ok(mgr, axis, indexer):
taken = mgr.take(indexer, axis)
tm.assert_numpy_array_equal(np.take(mat, indexer, axis),
taken.as_matrix(), check_dtype=False)
- tm.assert_numpy_array_equal(mgr.axes[axis].take(indexer),
- taken.axes[axis])
+ tm.assert_index_equal(mgr.axes[axis].take(indexer),
+ taken.axes[axis])
for mgr in self.MANAGERS:
for ax in range(mgr.ndim):
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 63a8b49ab4b00..c4ccef13f2844 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -87,19 +87,19 @@ def test_append_index(self):
(1.2, datetime.datetime(2011, 1, 2, tzinfo=tz)),
(1.3, datetime.datetime(2011, 1, 3, tzinfo=tz))]
expected = Index([1.1, 1.2, 1.3] + expected_tuples)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = midx_lv2.append(idx1)
expected = Index(expected_tuples + [1.1, 1.2, 1.3])
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = midx_lv2.append(midx_lv2)
- expected = MultiIndex.from_arrays([idx1.append(idx1), idx2.append(idx2)
- ])
- self.assertTrue(result.equals(expected))
+ expected = MultiIndex.from_arrays([idx1.append(idx1),
+ idx2.append(idx2)])
+ self.assert_index_equal(result, expected)
result = midx_lv2.append(midx_lv3)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = midx_lv3.append(midx_lv2)
expected = Index._simple_new(
@@ -107,7 +107,7 @@ def test_append_index(self):
(1.2, datetime.datetime(2011, 1, 2, tzinfo=tz), 'B'),
(1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] +
expected_tuples), None)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
def test_dataframe_constructor(self):
multi = DataFrame(np.random.randn(4, 4),
@@ -966,9 +966,7 @@ def check(left, right):
assert_series_equal(left, right)
self.assertFalse(left.index.is_unique)
li, ri = left.index, right.index
- for i in range(ri.nlevels):
- tm.assert_numpy_array_equal(li.levels[i], ri.levels[i])
- tm.assert_numpy_array_equal(li.labels[i], ri.labels[i])
+ tm.assert_index_equal(li, ri)
df = DataFrame(np.arange(12).reshape(4, 3),
index=list('abab'),
@@ -1542,8 +1540,8 @@ def aggf(x):
# for good measure, groupby detail
level_index = frame._get_axis(axis).levels[level]
- self.assertTrue(leftside._get_axis(axis).equals(level_index))
- self.assertTrue(rightside._get_axis(axis).equals(level_index))
+ self.assert_index_equal(leftside._get_axis(axis), level_index)
+ self.assert_index_equal(rightside._get_axis(axis), level_index)
assert_frame_equal(leftside, rightside)
@@ -2211,12 +2209,11 @@ def test_datetimeindex(self):
tz='US/Eastern')
idx = MultiIndex.from_arrays([idx1, idx2])
- expected1 = pd.DatetimeIndex(
- ['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00'
- ], tz='Asia/Tokyo')
+ expected1 = pd.DatetimeIndex(['2013-04-01 9:00', '2013-04-02 9:00',
+ '2013-04-03 9:00'], tz='Asia/Tokyo')
- self.assertTrue(idx.levels[0].equals(expected1))
- self.assertTrue(idx.levels[1].equals(idx2))
+ self.assert_index_equal(idx.levels[0], expected1)
+ self.assert_index_equal(idx.levels[1], idx2)
# from datetime combos
# GH 7888
@@ -2256,18 +2253,20 @@ def test_set_index_datetime(self):
df.index = pd.to_datetime(df.pop('datetime'), utc=True)
df.index = df.index.tz_localize('UTC').tz_convert('US/Pacific')
- expected = pd.DatetimeIndex(
- ['2011-07-19 07:00:00', '2011-07-19 08:00:00',
- '2011-07-19 09:00:00'])
+ expected = pd.DatetimeIndex(['2011-07-19 07:00:00',
+ '2011-07-19 08:00:00',
+ '2011-07-19 09:00:00'], name='datetime')
expected = expected.tz_localize('UTC').tz_convert('US/Pacific')
df = df.set_index('label', append=True)
- self.assertTrue(df.index.levels[0].equals(expected))
- self.assertTrue(df.index.levels[1].equals(pd.Index(['a', 'b'])))
+ self.assert_index_equal(df.index.levels[0], expected)
+ self.assert_index_equal(df.index.levels[1],
+ pd.Index(['a', 'b'], name='label'))
df = df.swaplevel(0, 1)
- self.assertTrue(df.index.levels[0].equals(pd.Index(['a', 'b'])))
- self.assertTrue(df.index.levels[1].equals(expected))
+ self.assert_index_equal(df.index.levels[0],
+ pd.Index(['a', 'b'], name='label'))
+ self.assert_index_equal(df.index.levels[1], expected)
df = DataFrame(np.random.random(6))
idx1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00',
@@ -2287,17 +2286,17 @@ def test_set_index_datetime(self):
expected1 = pd.DatetimeIndex(['2011-07-19 07:00:00',
'2011-07-19 08:00:00',
'2011-07-19 09:00:00'], tz='US/Eastern')
- expected2 = pd.DatetimeIndex(
- ['2012-04-01 09:00', '2012-04-02 09:00'], tz='US/Eastern')
+ expected2 = pd.DatetimeIndex(['2012-04-01 09:00', '2012-04-02 09:00'],
+ tz='US/Eastern')
- self.assertTrue(df.index.levels[0].equals(expected1))
- self.assertTrue(df.index.levels[1].equals(expected2))
- self.assertTrue(df.index.levels[2].equals(idx3))
+ self.assert_index_equal(df.index.levels[0], expected1)
+ self.assert_index_equal(df.index.levels[1], expected2)
+ self.assert_index_equal(df.index.levels[2], idx3)
# GH 7092
- self.assertTrue(df.index.get_level_values(0).equals(idx1))
- self.assertTrue(df.index.get_level_values(1).equals(idx2))
- self.assertTrue(df.index.get_level_values(2).equals(idx3))
+ self.assert_index_equal(df.index.get_level_values(0), idx1)
+ self.assert_index_equal(df.index.get_level_values(1), idx2)
+ self.assert_index_equal(df.index.get_level_values(2), idx3)
def test_reset_index_datetime(self):
# GH 3950
@@ -2404,13 +2403,13 @@ def test_set_index_period(self):
expected1 = pd.period_range('2011-01-01', periods=3, freq='M')
expected2 = pd.period_range('2013-01-01 09:00', periods=2, freq='H')
- self.assertTrue(df.index.levels[0].equals(expected1))
- self.assertTrue(df.index.levels[1].equals(expected2))
- self.assertTrue(df.index.levels[2].equals(idx3))
+ self.assert_index_equal(df.index.levels[0], expected1)
+ self.assert_index_equal(df.index.levels[1], expected2)
+ self.assert_index_equal(df.index.levels[2], idx3)
- self.assertTrue(df.index.get_level_values(0).equals(idx1))
- self.assertTrue(df.index.get_level_values(1).equals(idx2))
- self.assertTrue(df.index.get_level_values(2).equals(idx3))
+ self.assert_index_equal(df.index.get_level_values(0), idx1)
+ self.assert_index_equal(df.index.get_level_values(1), idx2)
+ self.assert_index_equal(df.index.get_level_values(2), idx3)
def test_repeat(self):
# GH 9361
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 7f8fb8fa424d1..e244a04127949 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -929,7 +929,7 @@ def test_axis(self):
samples = np.vstack([self.samples,
np.nan * np.ones(len(self.samples))])
skew = nanops.nanskew(samples, axis=1)
- tm.assert_almost_equal(skew, [self.actual_skew, np.nan])
+ tm.assert_almost_equal(skew, np.array([self.actual_skew, np.nan]))
def test_nans(self):
samples = np.hstack([self.samples, np.nan])
@@ -979,7 +979,7 @@ def test_axis(self):
samples = np.vstack([self.samples,
np.nan * np.ones(len(self.samples))])
kurt = nanops.nankurt(samples, axis=1)
- tm.assert_almost_equal(kurt, [self.actual_kurt, np.nan])
+ tm.assert_almost_equal(kurt, np.array([self.actual_kurt, np.nan]))
def test_nans(self):
samples = np.hstack([self.samples, np.nan])
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 87401f272adbd..7792a1f5d3509 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1086,12 +1086,12 @@ def test_ctor_dict(self):
# TODO: unused?
wp3 = Panel.from_dict(d3) # noqa
- self.assertTrue(wp.major_axis.equals(self.panel.major_axis))
+ self.assert_index_equal(wp.major_axis, self.panel.major_axis)
assert_panel_equal(wp, wp2)
# intersect
wp = Panel.from_dict(d, intersect=True)
- self.assertTrue(wp.major_axis.equals(itemb.index[5:]))
+ self.assert_index_equal(wp.major_axis, itemb.index[5:])
# use constructor
assert_panel_equal(Panel(d), Panel.from_dict(d))
@@ -1123,7 +1123,7 @@ def test_constructor_dict_mixed(self):
data = dict((k, v.values) for k, v in self.panel.iteritems())
result = Panel(data)
exp_major = Index(np.arange(len(self.panel.major_axis)))
- self.assertTrue(result.major_axis.equals(exp_major))
+ self.assert_index_equal(result.major_axis, exp_major)
result = Panel(data, items=self.panel.items,
major_axis=self.panel.major_axis,
@@ -1213,8 +1213,8 @@ def test_conform(self):
df = self.panel['ItemA'][:-5].filter(items=['A', 'B'])
conformed = self.panel.conform(df)
- assert (conformed.index.equals(self.panel.major_axis))
- assert (conformed.columns.equals(self.panel.minor_axis))
+ tm.assert_index_equal(conformed.index, self.panel.major_axis)
+ tm.assert_index_equal(conformed.columns, self.panel.minor_axis)
def test_convert_objects(self):
@@ -2078,11 +2078,11 @@ def test_rename(self):
renamed = self.panel.rename_axis(mapper, axis=0)
exp = Index(['foo', 'bar', 'baz'])
- self.assertTrue(renamed.items.equals(exp))
+ self.assert_index_equal(renamed.items, exp)
renamed = self.panel.rename_axis(str.lower, axis=2)
exp = Index(['a', 'b', 'c', 'd'])
- self.assertTrue(renamed.minor_axis.equals(exp))
+ self.assert_index_equal(renamed.minor_axis, exp)
# don't copy
renamed_nocopy = self.panel.rename_axis(mapper, axis=0, copy=False)
@@ -2485,7 +2485,7 @@ def test_axis_dummies(self):
transformed = make_axis_dummies(self.panel, 'minor',
transform=mapping.get)
self.assertEqual(len(transformed.columns), 2)
- self.assert_numpy_array_equal(transformed.columns, ['one', 'two'])
+ self.assert_index_equal(transformed.columns, Index(['one', 'two']))
# TODO: test correctness
@@ -2578,10 +2578,10 @@ def _monotonic(arr):
def test_panel_index():
index = panelm.panel_index([1, 2, 3, 4], [1, 2, 3])
- expected = MultiIndex.from_arrays([np.tile(
- [1, 2, 3, 4], 3), np.repeat(
- [1, 2, 3], 4)])
- assert (index.equals(expected))
+ expected = MultiIndex.from_arrays([np.tile([1, 2, 3, 4], 3),
+ np.repeat([1, 2, 3], 4)],
+ names=['time', 'panel'])
+ tm.assert_index_equal(index, expected)
def test_import_warnings():
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index e3e906d48ae98..607048df29faa 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -733,7 +733,7 @@ def test_constructor_dict_mixed(self):
data = dict((k, v.values) for k, v in self.panel4d.iteritems())
result = Panel4D(data)
exp_major = Index(np.arange(len(self.panel4d.major_axis)))
- self.assertTrue(result.major_axis.equals(exp_major))
+ self.assert_index_equal(result.major_axis, exp_major)
result = Panel4D(data,
labels=self.panel4d.labels,
@@ -799,9 +799,9 @@ def test_conform(self):
p = self.panel4d['l1'].filter(items=['ItemA', 'ItemB'])
conformed = self.panel4d.conform(p)
- assert(conformed.items.equals(self.panel4d.labels))
- assert(conformed.major_axis.equals(self.panel4d.major_axis))
- assert(conformed.minor_axis.equals(self.panel4d.minor_axis))
+ tm.assert_index_equal(conformed.items, self.panel4d.labels)
+ tm.assert_index_equal(conformed.major_axis, self.panel4d.major_axis)
+ tm.assert_index_equal(conformed.minor_axis, self.panel4d.minor_axis)
def test_reindex(self):
ref = self.panel4d['l2']
@@ -1085,11 +1085,11 @@ def test_rename(self):
renamed = self.panel4d.rename_axis(mapper, axis=0)
exp = Index(['foo', 'bar', 'baz'])
- self.assertTrue(renamed.labels.equals(exp))
+ self.assert_index_equal(renamed.labels, exp)
renamed = self.panel4d.rename_axis(str.lower, axis=3)
exp = Index(['a', 'b', 'c', 'd'])
- self.assertTrue(renamed.minor_axis.equals(exp))
+ self.assert_index_equal(renamed.minor_axis, exp)
# don't copy
renamed_nocopy = self.panel4d.rename_axis(mapper, axis=0, copy=False)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 423a288077c4d..3d1851966afd0 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -48,12 +48,12 @@ def test_iter(self):
# indices of each yielded Series should be equal to the index of
# the original Series
- tm.assert_numpy_array_equal(s.index, ds.index)
+ tm.assert_index_equal(s.index, ds.index)
for el in s:
# each element of the series is either a basestring/str or nan
- self.assertTrue(isinstance(el, compat.string_types) or isnull(
- el))
+ self.assertTrue(isinstance(el, compat.string_types) or
+ isnull(el))
# desired behavior is to iterate until everything would be nan on the
# next iter so make sure the last element of the iterator was 'l' in
@@ -95,8 +95,8 @@ def test_iter_object_try_string(self):
self.assertEqual(s, 'h')
def test_cat(self):
- one = ['a', 'a', 'b', 'b', 'c', NA]
- two = ['a', NA, 'b', 'd', 'foo', NA]
+ one = np.array(['a', 'a', 'b', 'b', 'c', NA], dtype=np.object_)
+ two = np.array(['a', NA, 'b', 'd', 'foo', NA], dtype=np.object_)
# single array
result = strings.str_cat(one)
@@ -121,21 +121,24 @@ def test_cat(self):
# Multiple arrays
result = strings.str_cat(one, [two], na_rep='NA')
- exp = ['aa', 'aNA', 'bb', 'bd', 'cfoo', 'NANA']
+ exp = np.array(['aa', 'aNA', 'bb', 'bd', 'cfoo', 'NANA'],
+ dtype=np.object_)
self.assert_numpy_array_equal(result, exp)
result = strings.str_cat(one, two)
- exp = ['aa', NA, 'bb', 'bd', 'cfoo', NA]
+ exp = np.array(['aa', NA, 'bb', 'bd', 'cfoo', NA], dtype=np.object_)
tm.assert_almost_equal(result, exp)
def test_count(self):
- values = ['foo', 'foofoo', NA, 'foooofooofommmfoo']
+ values = np.array(['foo', 'foofoo', NA, 'foooofooofommmfoo'],
+ dtype=np.object_)
result = strings.str_count(values, 'f[o]+')
- exp = Series([1, 2, NA, 4])
- tm.assert_almost_equal(result, exp)
+ exp = np.array([1, 2, NA, 4])
+ tm.assert_numpy_array_equal(result, exp)
result = Series(values).str.count('f[o]+')
+ exp = Series([1, 2, NA, 4])
tm.assertIsInstance(result, Series)
tm.assert_series_equal(result, exp)
@@ -163,61 +166,66 @@ def test_count(self):
tm.assert_series_equal(result, exp)
def test_contains(self):
- values = ['foo', NA, 'fooommm__foo', 'mmm_', 'foommm[_]+bar']
+ values = np.array(['foo', NA, 'fooommm__foo',
+ 'mmm_', 'foommm[_]+bar'], dtype=np.object_)
pat = 'mmm[_]+'
result = strings.str_contains(values, pat)
- expected = [False, NA, True, True, False]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([False, NA, True, True, False], dtype=np.object_)
+ tm.assert_numpy_array_equal(result, expected)
result = strings.str_contains(values, pat, regex=False)
- expected = [False, NA, False, False, True]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([False, NA, False, False, True], dtype=np.object_)
+ tm.assert_numpy_array_equal(result, expected)
values = ['foo', 'xyz', 'fooommm__foo', 'mmm_']
result = strings.str_contains(values, pat)
- expected = [False, False, True, True]
+ expected = np.array([False, False, True, True])
self.assertEqual(result.dtype, np.bool_)
- tm.assert_almost_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# case insensitive using regex
values = ['Foo', 'xYz', 'fOOomMm__fOo', 'MMM_']
result = strings.str_contains(values, 'FOO|mmm', case=False)
- expected = [True, False, True, True]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([True, False, True, True])
+ tm.assert_numpy_array_equal(result, expected)
# case insensitive without regex
result = strings.str_contains(values, 'foo', regex=False, case=False)
- expected = [True, False, True, False]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([True, False, True, False])
+ tm.assert_numpy_array_equal(result, expected)
# mixed
mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
rs = strings.str_contains(mixed, 'o')
- xp = Series([False, NA, False, NA, NA, True, NA, NA, NA])
- tm.assert_almost_equal(rs, xp)
+ xp = np.array([False, NA, False, NA, NA, True, NA, NA, NA],
+ dtype=np.object_)
+ tm.assert_numpy_array_equal(rs, xp)
rs = Series(mixed).str.contains('o')
+ xp = Series([False, NA, False, NA, NA, True, NA, NA, NA])
tm.assertIsInstance(rs, Series)
tm.assert_series_equal(rs, xp)
# unicode
- values = [u('foo'), NA, u('fooommm__foo'), u('mmm_')]
+ values = np.array([u'foo', NA, u'fooommm__foo', u'mmm_'],
+ dtype=np.object_)
pat = 'mmm[_]+'
result = strings.str_contains(values, pat)
- expected = [False, np.nan, True, True]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([False, np.nan, True, True], dtype=np.object_)
+ tm.assert_numpy_array_equal(result, expected)
result = strings.str_contains(values, pat, na=False)
- expected = [False, False, True, True]
- tm.assert_almost_equal(result, expected)
+ expected = np.array([False, False, True, True])
+ tm.assert_numpy_array_equal(result, expected)
- values = ['foo', 'xyz', 'fooommm__foo', 'mmm_']
+ values = np.array(['foo', 'xyz', 'fooommm__foo', 'mmm_'],
+ dtype=np.object_)
result = strings.str_contains(values, pat)
- expected = [False, False, True, True]
+ expected = np.array([False, False, True, True])
self.assertEqual(result.dtype, np.bool_)
- tm.assert_almost_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# na
values = Series(['om', 'foo', np.nan])
@@ -232,13 +240,16 @@ def test_startswith(self):
tm.assert_series_equal(result, exp)
# mixed
- mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
+ mixed = np.array(['a', NA, 'b', True, datetime.today(),
+ 'foo', None, 1, 2.], dtype=np.object_)
rs = strings.str_startswith(mixed, 'f')
- xp = Series([False, NA, False, NA, NA, True, NA, NA, NA])
- tm.assert_almost_equal(rs, xp)
+ xp = np.array([False, NA, False, NA, NA, True, NA, NA, NA],
+ dtype=np.object_)
+ tm.assert_numpy_array_equal(rs, xp)
rs = Series(mixed).str.startswith('f')
tm.assertIsInstance(rs, Series)
+ xp = Series([False, NA, False, NA, NA, True, NA, NA, NA])
tm.assert_series_equal(rs, xp)
# unicode
@@ -262,10 +273,12 @@ def test_endswith(self):
# mixed
mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.]
rs = strings.str_endswith(mixed, 'f')
- xp = Series([False, NA, False, NA, NA, False, NA, NA, NA])
- tm.assert_almost_equal(rs, xp)
+ xp = np.array([False, NA, False, NA, NA, False, NA, NA, NA],
+ dtype=np.object_)
+ tm.assert_numpy_array_equal(rs, xp)
rs = Series(mixed).str.endswith('f')
+ xp = Series([False, NA, False, NA, NA, False, NA, NA, NA])
tm.assertIsInstance(rs, Series)
tm.assert_series_equal(rs, xp)
@@ -574,7 +587,12 @@ def test_extract_expand_False(self):
s_or_idx = klass(['A1', 'A2'])
result = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=False)
self.assertEqual(result.name, 'uno')
- tm.assert_numpy_array_equal(result, klass(['A', 'A']))
+
+ exp = klass(['A', 'A'], name='uno')
+ if klass == Series:
+ tm.assert_series_equal(result, exp)
+ else:
+ tm.assert_index_equal(result, exp)
s = Series(['A1', 'B2', 'C3'])
# one group, no matches
@@ -713,8 +731,9 @@ def test_extract_expand_True(self):
# single group renames series/index properly
s_or_idx = klass(['A1', 'A2'])
result_df = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=True)
+ tm.assertIsInstance(result_df, DataFrame)
result_series = result_df['uno']
- tm.assert_numpy_array_equal(result_series, klass(['A', 'A']))
+ assert_series_equal(result_series, Series(['A', 'A'], name='uno'))
def test_extract_series(self):
# extract should give the same result whether or not the
@@ -1422,41 +1441,48 @@ def test_find_nan(self):
tm.assert_series_equal(result, Series([4, np.nan, -1, np.nan, -1]))
def test_index(self):
+
+ def _check(result, expected):
+ if isinstance(result, Series):
+ tm.assert_series_equal(result, expected)
+ else:
+ tm.assert_index_equal(result, expected)
+
for klass in [Series, Index]:
s = klass(['ABCDEFG', 'BCDEFEF', 'DEFGHIJEF', 'EFGHEF'])
result = s.str.index('EF')
- tm.assert_numpy_array_equal(result, klass([4, 3, 1, 0]))
+ _check(result, klass([4, 3, 1, 0]))
expected = np.array([v.index('EF') for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
result = s.str.rindex('EF')
- tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4]))
+ _check(result, klass([4, 5, 7, 4]))
expected = np.array([v.rindex('EF') for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
result = s.str.index('EF', 3)
- tm.assert_numpy_array_equal(result, klass([4, 3, 7, 4]))
+ _check(result, klass([4, 3, 7, 4]))
expected = np.array([v.index('EF', 3) for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
result = s.str.rindex('EF', 3)
- tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4]))
+ _check(result, klass([4, 5, 7, 4]))
expected = np.array([v.rindex('EF', 3) for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
result = s.str.index('E', 4, 8)
- tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4]))
+ _check(result, klass([4, 5, 7, 4]))
expected = np.array([v.index('E', 4, 8) for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
result = s.str.rindex('E', 0, 5)
- tm.assert_numpy_array_equal(result, klass([4, 3, 1, 4]))
+ _check(result, klass([4, 3, 1, 4]))
expected = np.array([v.rindex('E', 0, 5) for v in s.values],
dtype=np.int64)
tm.assert_numpy_array_equal(result.values, expected)
@@ -1471,9 +1497,9 @@ def test_index(self):
# test with nan
s = Series(['abcb', 'ab', 'bcbe', np.nan])
result = s.str.index('b')
- tm.assert_numpy_array_equal(result, Series([1, 1, 0, np.nan]))
+ tm.assert_series_equal(result, Series([1, 1, 0, np.nan]))
result = s.str.rindex('b')
- tm.assert_numpy_array_equal(result, Series([3, 1, 2, np.nan]))
+ tm.assert_series_equal(result, Series([3, 1, 2, np.nan]))
def test_pad(self):
values = Series(['a', 'b', NA, 'c', NA, 'eeeeee'])
@@ -1558,6 +1584,13 @@ def test_pad_fillchar(self):
result = values.str.pad(5, fillchar=5)
def test_translate(self):
+
+ def _check(result, expected):
+ if isinstance(result, Series):
+ tm.assert_series_equal(result, expected)
+ else:
+ tm.assert_index_equal(result, expected)
+
for klass in [Series, Index]:
s = klass(['abcdefg', 'abcc', 'cdddfg', 'cdefggg'])
if not compat.PY3:
@@ -1567,17 +1600,17 @@ def test_translate(self):
table = str.maketrans('abc', 'cde')
result = s.str.translate(table)
expected = klass(['cdedefg', 'cdee', 'edddfg', 'edefggg'])
- tm.assert_numpy_array_equal(result, expected)
+ _check(result, expected)
# use of deletechars is python 2 only
if not compat.PY3:
result = s.str.translate(table, deletechars='fg')
expected = klass(['cdede', 'cdee', 'eddd', 'ede'])
- tm.assert_numpy_array_equal(result, expected)
+ _check(result, expected)
result = s.str.translate(None, deletechars='fg')
expected = klass(['abcde', 'abcc', 'cddd', 'cde'])
- tm.assert_numpy_array_equal(result, expected)
+ _check(result, expected)
else:
with tm.assertRaisesRegexp(
ValueError, "deletechars is not a valid argument"):
@@ -1587,7 +1620,7 @@ def test_translate(self):
s = Series(['a', 'b', 'c', 1.2])
expected = Series(['c', 'd', 'e', np.nan])
result = s.str.translate(table)
- tm.assert_numpy_array_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_center_ljust_rjust(self):
values = Series(['a', 'b', NA, 'c', NA, 'eeeeee'])
@@ -1985,8 +2018,8 @@ def test_rsplit_to_multiindex_expand(self):
idx = Index(['some_equal_splits', 'with_no_nans'])
result = idx.str.rsplit('_', expand=True, n=1)
- exp = MultiIndex.from_tuples([('some_equal', 'splits'), ('with_no',
- 'nans')])
+ exp = MultiIndex.from_tuples([('some_equal', 'splits'),
+ ('with_no', 'nans')])
tm.assert_index_equal(result, exp)
self.assertEqual(result.nlevels, 2)
@@ -1996,7 +2029,7 @@ def test_split_with_name(self):
# should preserve name
s = Series(['a,b', 'c,d'], name='xxx')
res = s.str.split(',')
- exp = Series([('a', 'b'), ('c', 'd')], name='xxx')
+ exp = Series([['a', 'b'], ['c', 'd']], name='xxx')
tm.assert_series_equal(res, exp)
res = s.str.split(',', expand=True)
@@ -2018,60 +2051,60 @@ def test_partition_series(self):
values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h'])
result = values.str.partition('_', expand=False)
- exp = Series([['a', '_', 'b_c'], ['c', '_', 'd_e'], NA, ['f', '_',
- 'g_h']])
+ exp = Series([('a', '_', 'b_c'), ('c', '_', 'd_e'), NA,
+ ('f', '_', 'g_h')])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('_', expand=False)
- exp = Series([['a_b', '_', 'c'], ['c_d', '_', 'e'], NA, ['f_g', '_',
- 'h']])
+ exp = Series([('a_b', '_', 'c'), ('c_d', '_', 'e'), NA,
+ ('f_g', '_', 'h')])
tm.assert_series_equal(result, exp)
# more than one char
values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h'])
result = values.str.partition('__', expand=False)
- exp = Series([['a', '__', 'b__c'], ['c', '__', 'd__e'], NA, ['f', '__',
- 'g__h']])
+ exp = Series([('a', '__', 'b__c'), ('c', '__', 'd__e'), NA,
+ ('f', '__', 'g__h')])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('__', expand=False)
- exp = Series([['a__b', '__', 'c'], ['c__d', '__', 'e'], NA,
- ['f__g', '__', 'h']])
+ exp = Series([('a__b', '__', 'c'), ('c__d', '__', 'e'), NA,
+ ('f__g', '__', 'h')])
tm.assert_series_equal(result, exp)
# None
values = Series(['a b c', 'c d e', NA, 'f g h'])
result = values.str.partition(expand=False)
- exp = Series([['a', ' ', 'b c'], ['c', ' ', 'd e'], NA, ['f', ' ',
- 'g h']])
+ exp = Series([('a', ' ', 'b c'), ('c', ' ', 'd e'), NA,
+ ('f', ' ', 'g h')])
tm.assert_series_equal(result, exp)
result = values.str.rpartition(expand=False)
- exp = Series([['a b', ' ', 'c'], ['c d', ' ', 'e'], NA, ['f g', ' ',
- 'h']])
+ exp = Series([('a b', ' ', 'c'), ('c d', ' ', 'e'), NA,
+ ('f g', ' ', 'h')])
tm.assert_series_equal(result, exp)
    # Not split
values = Series(['abc', 'cde', NA, 'fgh'])
result = values.str.partition('_', expand=False)
- exp = Series([['abc', '', ''], ['cde', '', ''], NA, ['fgh', '', '']])
+ exp = Series([('abc', '', ''), ('cde', '', ''), NA, ('fgh', '', '')])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('_', expand=False)
- exp = Series([['', '', 'abc'], ['', '', 'cde'], NA, ['', '', 'fgh']])
+ exp = Series([('', '', 'abc'), ('', '', 'cde'), NA, ('', '', 'fgh')])
tm.assert_series_equal(result, exp)
# unicode
- values = Series([u('a_b_c'), u('c_d_e'), NA, u('f_g_h')])
+ values = Series([u'a_b_c', u'c_d_e', NA, u'f_g_h'])
result = values.str.partition('_', expand=False)
- exp = Series([[u('a'), u('_'), u('b_c')], [u('c'), u('_'), u('d_e')],
- NA, [u('f'), u('_'), u('g_h')]])
+ exp = Series([(u'a', u'_', u'b_c'), (u'c', u'_', u'd_e'),
+ NA, (u'f', u'_', u'g_h')])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('_', expand=False)
- exp = Series([[u('a_b'), u('_'), u('c')], [u('c_d'), u('_'), u('e')],
- NA, [u('f_g'), u('_'), u('h')]])
+ exp = Series([(u'a_b', u'_', u'c'), (u'c_d', u'_', u'e'),
+ NA, (u'f_g', u'_', u'h')])
tm.assert_series_equal(result, exp)
# compare to standard lib
diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py
index 357d53cb58c72..9cc76591e9b7b 100644
--- a/pandas/tests/test_testing.py
+++ b/pandas/tests/test_testing.py
@@ -43,6 +43,8 @@ def test_assert_almost_equal_numbers(self):
def test_assert_almost_equal_numbers_with_zeros(self):
self._assert_almost_equal_both(0, 0)
+ self._assert_almost_equal_both(0, 0.0)
+ self._assert_almost_equal_both(0, np.float64(0))
self._assert_almost_equal_both(0.000001, 0)
self._assert_not_almost_equal_both(0.001, 0)
@@ -81,9 +83,11 @@ def __getitem__(self, item):
if item == 'a':
return 1
- self._assert_almost_equal_both({'a': 1}, DictLikeObj())
+ self._assert_almost_equal_both({'a': 1}, DictLikeObj(),
+ check_dtype=False)
- self._assert_not_almost_equal_both({'a': 2}, DictLikeObj())
+ self._assert_not_almost_equal_both({'a': 2}, DictLikeObj(),
+ check_dtype=False)
def test_assert_almost_equal_strings(self):
self._assert_almost_equal_both('abc', 'abc')
@@ -95,7 +99,13 @@ def test_assert_almost_equal_strings(self):
def test_assert_almost_equal_iterables(self):
self._assert_almost_equal_both([1, 2, 3], [1, 2, 3])
- self._assert_almost_equal_both(np.array([1, 2, 3]), [1, 2, 3])
+ self._assert_almost_equal_both(np.array([1, 2, 3]),
+ np.array([1, 2, 3]))
+
+ # class / dtype are different
+ self._assert_not_almost_equal_both(np.array([1, 2, 3]), [1, 2, 3])
+ self._assert_not_almost_equal_both(np.array([1, 2, 3]),
+ np.array([1., 2., 3.]))
# Can't compare generators
self._assert_not_almost_equal_both(iter([1, 2, 3]), [1, 2, 3])
@@ -106,8 +116,8 @@ def test_assert_almost_equal_iterables(self):
def test_assert_almost_equal_null(self):
self._assert_almost_equal_both(None, None)
- self._assert_almost_equal_both(None, np.NaN)
+ self._assert_not_almost_equal_both(None, np.NaN)
self._assert_not_almost_equal_both(None, 0)
self._assert_not_almost_equal_both(np.NaN, 0)
@@ -176,7 +186,7 @@ def test_numpy_array_equal_message(self):
assert_almost_equal(np.array([1, 2]), np.array([3, 4, 5]))
# scalar comparison
- expected = """: 1 != 2"""
+ expected = """Expected type """
with assertRaisesRegexp(AssertionError, expected):
assert_numpy_array_equal(1, 2)
expected = """expected 2\\.00000 but got 1\\.00000, with decimal 5"""
@@ -191,6 +201,7 @@ def test_numpy_array_equal_message(self):
\\[right\\]: int"""
with assertRaisesRegexp(AssertionError, expected):
+ # numpy_array_equal only accepts np.ndarray
assert_numpy_array_equal(np.array([1]), 1)
with assertRaisesRegexp(AssertionError, expected):
assert_almost_equal(np.array([1]), 1)
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index 854b7295aece4..4dd1cf54a5527 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -36,7 +36,8 @@ def test_backfill(self):
filler = algos.backfill_int64(old.values, new.values)
- expect_filler = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, -1]
+ expect_filler = np.array([0, 0, 1, 1, 1, 1,
+ 2, 2, 2, 2, 2, -1], dtype=np.int64)
self.assert_numpy_array_equal(filler, expect_filler)
# corner case
@@ -44,7 +45,7 @@ def test_backfill(self):
new = Index(lrange(5, 10))
filler = algos.backfill_int64(old.values, new.values)
- expect_filler = [-1, -1, -1, -1, -1]
+ expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
self.assert_numpy_array_equal(filler, expect_filler)
def test_pad(self):
@@ -53,14 +54,15 @@ def test_pad(self):
filler = algos.pad_int64(old.values, new.values)
- expect_filler = [-1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
+ expect_filler = np.array([-1, 0, 0, 0, 0, 1,
+ 1, 1, 1, 1, 2, 2], dtype=np.int64)
self.assert_numpy_array_equal(filler, expect_filler)
# corner case
old = Index([5, 10])
new = Index(lrange(5))
filler = algos.pad_int64(old.values, new.values)
- expect_filler = [-1, -1, -1, -1, -1]
+ expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
self.assert_numpy_array_equal(filler, expect_filler)
@@ -113,9 +115,9 @@ def test_inner_join_indexer():
b = np.array([5], dtype=np.int64)
index, ares, bres = algos.inner_join_indexer_int64(a, b)
- assert_almost_equal(index, [5])
- assert_almost_equal(ares, [0])
- assert_almost_equal(bres, [0])
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
def test_outer_join_indexer():
@@ -136,9 +138,9 @@ def test_outer_join_indexer():
b = np.array([5], dtype=np.int64)
index, ares, bres = algos.outer_join_indexer_int64(a, b)
- assert_almost_equal(index, [5])
- assert_almost_equal(ares, [0])
- assert_almost_equal(bres, [0])
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
def test_left_join_indexer():
@@ -158,9 +160,9 @@ def test_left_join_indexer():
b = np.array([5], dtype=np.int64)
index, ares, bres = algos.left_join_indexer_int64(a, b)
- assert_almost_equal(index, [5])
- assert_almost_equal(ares, [0])
- assert_almost_equal(bres, [0])
+ tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+ tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+ tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
def test_left_join_indexer2():
@@ -494,8 +496,8 @@ def _check(dtype):
bins = np.array([6, 12, 20])
out = np.zeros((3, 4), dtype)
counts = np.zeros(len(out), dtype=np.int64)
- labels = com._ensure_int64(np.repeat(
- np.arange(3), np.diff(np.r_[0, bins])))
+ labels = com._ensure_int64(np.repeat(np.arange(3),
+ np.diff(np.r_[0, bins])))
func = getattr(algos, 'group_ohlc_%s' % dtype)
func(out, counts, obj[:, None], labels)
@@ -505,11 +507,12 @@ def _ohlc(group):
return np.repeat(nan, 4)
return [group[0], group.max(), group.min(), group[-1]]
- expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]), _ohlc(obj[12:])
- ])
+ expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
+ _ohlc(obj[12:])])
assert_almost_equal(out, expected)
- assert_almost_equal(counts, [6, 6, 8])
+ tm.assert_numpy_array_equal(counts,
+ np.array([6, 6, 8], dtype=np.int64))
obj[:6] = nan
func(out, counts, obj[:, None], labels)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 1185f95dbd51f..2ec419221c6d8 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -12,10 +12,6 @@
import pandas as pd
from pandas import (Series, DataFrame, Panel, bdate_range, isnull,
notnull, concat)
-from pandas.util.testing import (assert_almost_equal, assert_series_equal,
- assert_frame_equal, assert_panel_equal,
- assert_index_equal, assert_numpy_array_equal,
- slow)
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
import pandas.core.window as rwindow
@@ -27,6 +23,13 @@
N, K = 100, 10
+def assert_equal(left, right):
+ if isinstance(left, Series):
+ tm.assert_series_equal(left, right)
+ else:
+ tm.assert_frame_equal(left, right)
+
+
class Base(tm.TestCase):
_multiprocess_can_split_ = True
@@ -94,11 +97,11 @@ def tests_skip_nuisance(self):
expected = DataFrame({'A': [np.nan, np.nan, 3, 6, 9],
'B': [np.nan, np.nan, 18, 21, 24]},
columns=list('AB'))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
expected = pd.concat([r[['A', 'B']].sum(), df[['C']]], axis=1)
result = r.sum()
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_agg(self):
df = DataFrame({'A': range(5), 'B': range(0, 10, 2)})
@@ -115,50 +118,51 @@ def test_agg(self):
expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1)
expected.columns = pd.MultiIndex.from_product([['A', 'B'], ['mean',
'std']])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = r.aggregate({'A': np.mean, 'B': np.std})
expected = pd.concat([a_mean, b_std], axis=1)
- assert_frame_equal(result, expected, check_like=True)
+ tm.assert_frame_equal(result, expected, check_like=True)
result = r.aggregate({'A': ['mean', 'std']})
expected = pd.concat([a_mean, a_std], axis=1)
expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ('A',
'std')])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = r['A'].aggregate(['mean', 'sum'])
expected = pd.concat([a_mean, a_sum], axis=1)
expected.columns = ['mean', 'sum']
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = r.aggregate({'A': {'mean': 'mean', 'sum': 'sum'}})
expected = pd.concat([a_mean, a_sum], axis=1)
- expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ('A',
- 'sum')])
- assert_frame_equal(result, expected, check_like=True)
+ expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'),
+ ('A', 'sum')])
+ tm.assert_frame_equal(result, expected, check_like=True)
result = r.aggregate({'A': {'mean': 'mean',
'sum': 'sum'},
'B': {'mean2': 'mean',
'sum2': 'sum'}})
expected = pd.concat([a_mean, a_sum, b_mean, b_sum], axis=1)
- expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), (
- 'A', 'sum'), ('B', 'mean2'), ('B', 'sum2')])
- assert_frame_equal(result, expected, check_like=True)
+ exp_cols = [('A', 'mean'), ('A', 'sum'), ('B', 'mean2'), ('B', 'sum2')]
+ expected.columns = pd.MultiIndex.from_tuples(exp_cols)
+ tm.assert_frame_equal(result, expected, check_like=True)
result = r.aggregate({'A': ['mean', 'std'], 'B': ['mean', 'std']})
expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1)
- expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), (
- 'A', 'std'), ('B', 'mean'), ('B', 'std')])
- assert_frame_equal(result, expected, check_like=True)
+
+ exp_cols = [('A', 'mean'), ('A', 'std'), ('B', 'mean'), ('B', 'std')]
+ expected.columns = pd.MultiIndex.from_tuples(exp_cols)
+ tm.assert_frame_equal(result, expected, check_like=True)
# passed lambda
result = r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)})
rcustom = r['B'].apply(lambda x: np.std(x, ddof=1))
expected = pd.concat([a_sum, rcustom], axis=1)
- assert_frame_equal(result, expected, check_like=True)
+ tm.assert_frame_equal(result, expected, check_like=True)
def test_agg_consistency(self):
@@ -195,13 +199,13 @@ def f():
'ra', 'std'), ('rb', 'mean'), ('rb', 'std')])
result = r[['A', 'B']].agg({'A': {'ra': ['mean', 'std']},
'B': {'rb': ['mean', 'std']}})
- assert_frame_equal(result, expected, check_like=True)
+ tm.assert_frame_equal(result, expected, check_like=True)
result = r.agg({'A': {'ra': ['mean', 'std']},
'B': {'rb': ['mean', 'std']}})
expected.columns = pd.MultiIndex.from_tuples([('A', 'ra', 'mean'), (
'A', 'ra', 'std'), ('B', 'rb', 'mean'), ('B', 'rb', 'std')])
- assert_frame_equal(result, expected, check_like=True)
+ tm.assert_frame_equal(result, expected, check_like=True)
def test_window_with_args(self):
tm._skip_if_no_scipy()
@@ -213,7 +217,7 @@ def test_window_with_args(self):
expected.columns = ['<lambda>', '<lambda>']
result = r.aggregate([lambda x: x.mean(std=10),
lambda x: x.mean(std=.01)])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def a(x):
return x.mean(std=10)
@@ -224,7 +228,7 @@ def b(x):
expected = pd.concat([r.mean(std=10), r.mean(std=.01)], axis=1)
expected.columns = ['a', 'b']
result = r.aggregate([a, b])
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_preserve_metadata(self):
# GH 10565
@@ -262,7 +266,7 @@ def test_how_compat(self):
expected = getattr(
getattr(s, t)(freq='D', **kwargs), op)(how=how)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
class TestWindow(Base):
@@ -555,7 +559,7 @@ def test_dtypes(self):
def check_dtypes(self, f, f_name, d, d_name, exp):
roll = d.rolling(window=self.window)
result = f(roll)
- assert_almost_equal(result, exp)
+ tm.assert_almost_equal(result, exp)
class TestDtype_object(Dtype):
@@ -642,7 +646,7 @@ def check_dtypes(self, f, f_name, d, d_name, exp):
if f_name == 'count':
result = f(roll)
- assert_almost_equal(result, exp)
+ tm.assert_almost_equal(result, exp)
else:
@@ -714,11 +718,11 @@ def test_cmov_mean(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
rs = mom.rolling_mean(vals, 5, center=True)
- assert_almost_equal(xp, rs)
+ tm.assert_almost_equal(xp, rs)
xp = Series(rs)
rs = Series(vals).rolling(5, center=True).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window(self):
# GH 8238
@@ -731,11 +735,11 @@ def test_cmov_window(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
- assert_almost_equal(xp, rs)
+ tm.assert_almost_equal(xp, rs)
xp = Series(rs)
rs = Series(vals).rolling(5, win_type='boxcar', center=True).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_corner(self):
# GH 8238
@@ -777,7 +781,7 @@ def test_cmov_window_frame(self):
# DataFrame
rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).mean()
- assert_frame_equal(DataFrame(xp), rs)
+ tm.assert_frame_equal(DataFrame(xp), rs)
# invalid method
with self.assertRaises(AttributeError):
@@ -791,7 +795,7 @@ def test_cmov_window_frame(self):
], [np.nan, np.nan]])
rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).sum()
- assert_frame_equal(DataFrame(xp), rs)
+ tm.assert_frame_equal(DataFrame(xp), rs)
def test_cmov_window_na_min_periods(self):
tm._skip_if_no_scipy()
@@ -804,7 +808,7 @@ def test_cmov_window_na_min_periods(self):
xp = vals.rolling(5, min_periods=4, center=True).mean()
rs = vals.rolling(5, win_type='boxcar', min_periods=4,
center=True).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_regular(self):
# GH 8238
@@ -837,7 +841,7 @@ def test_cmov_window_regular(self):
for wt in win_types:
xp = Series(xps[wt])
rs = Series(vals).rolling(5, win_type=wt, center=True).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_regular_linear_range(self):
# GH 8238
@@ -854,7 +858,7 @@ def test_cmov_window_regular_linear_range(self):
for wt in win_types:
rs = Series(vals).rolling(5, win_type=wt, center=True).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_regular_missing_data(self):
# GH 8238
@@ -887,7 +891,7 @@ def test_cmov_window_regular_missing_data(self):
for wt in win_types:
xp = Series(xps[wt])
rs = Series(vals).rolling(5, win_type=wt, min_periods=3).mean()
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_special(self):
# GH 8238
@@ -914,7 +918,7 @@ def test_cmov_window_special(self):
for wt, k in zip(win_types, kwds):
xp = Series(xps[wt])
rs = Series(vals).rolling(5, win_type=wt, center=True).mean(**k)
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_cmov_window_special_linear_range(self):
# GH 8238
@@ -932,7 +936,7 @@ def test_cmov_window_special_linear_range(self):
for wt, k in zip(win_types, kwds):
rs = Series(vals).rolling(5, win_type=wt, center=True).mean(**k)
- assert_series_equal(xp, rs)
+ tm.assert_series_equal(xp, rs)
def test_rolling_median(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -946,7 +950,7 @@ def test_rolling_min(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
a = np.array([1, 2, 3, 4, 5])
b = mom.rolling_min(a, window=100, min_periods=1)
- assert_almost_equal(b, np.ones(len(a)))
+ tm.assert_almost_equal(b, np.ones(len(a)))
self.assertRaises(ValueError, mom.rolling_min, np.array([1, 2, 3]),
window=3, min_periods=5)
@@ -958,7 +962,7 @@ def test_rolling_max(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
a = np.array([1, 2, 3, 4, 5], dtype=np.float64)
b = mom.rolling_max(a, window=100, min_periods=1)
- assert_almost_equal(a, b)
+ tm.assert_almost_equal(a, b)
self.assertRaises(ValueError, mom.rolling_max, np.array([1, 2, 3]),
window=3, min_periods=5)
@@ -994,7 +998,8 @@ def test_rolling_apply(self):
category=RuntimeWarning)
ser = Series([])
- assert_series_equal(ser, ser.rolling(10).apply(lambda x: x.mean()))
+ tm.assert_series_equal(ser,
+ ser.rolling(10).apply(lambda x: x.mean()))
f = lambda x: x[np.isfinite(x)].mean()
@@ -1010,10 +1015,10 @@ def roll_mean(x, window, min_periods=None, freq=None, center=False,
s = Series([None, None, None])
result = s.rolling(2, min_periods=0).apply(lambda x: len(x))
expected = Series([1., 2., 2.])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = s.rolling(2, min_periods=0).apply(len)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling_apply_out_of_bounds(self):
# #1850
@@ -1026,7 +1031,7 @@ def test_rolling_apply_out_of_bounds(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = mom.rolling_apply(arr, 10, np.sum, min_periods=1)
- assert_almost_equal(result, result)
+ tm.assert_almost_equal(result, result)
def test_rolling_std(self):
self._check_moment_func(mom.rolling_std, lambda x: np.std(x, ddof=1),
@@ -1039,13 +1044,13 @@ def test_rolling_std_1obs(self):
result = mom.rolling_std(np.array([1., 2., 3., 4., 5.]),
1, min_periods=1)
expected = np.array([np.nan] * 5)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = mom.rolling_std(np.array([1., 2., 3., 4., 5.]),
1, min_periods=1, ddof=0)
expected = np.zeros(5)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = mom.rolling_std(np.array([np.nan, np.nan, 3., 4., 5.]),
@@ -1159,7 +1164,7 @@ def get_result(arr, window, min_periods=None, center=False):
kwargs)
result = get_result(self.arr, window)
- assert_almost_equal(result[-1], static_comp(self.arr[-50:]))
+ tm.assert_almost_equal(result[-1], static_comp(self.arr[-50:]))
if preserve_nan:
assert (np.isnan(result[self._nan_locs]).all())
@@ -1171,7 +1176,7 @@ def get_result(arr, window, min_periods=None, center=False):
if has_min_periods:
result = get_result(arr, 50, min_periods=30)
- assert_almost_equal(result[-1], static_comp(arr[10:-10]))
+ tm.assert_almost_equal(result[-1], static_comp(arr[10:-10]))
# min_periods is working correctly
result = get_result(arr, 20, min_periods=15)
@@ -1189,10 +1194,10 @@ def get_result(arr, window, min_periods=None, center=False):
# min_periods=0
result0 = get_result(arr, 20, min_periods=0)
result1 = get_result(arr, 20, min_periods=1)
- assert_almost_equal(result0, result1)
+ tm.assert_almost_equal(result0, result1)
else:
result = get_result(arr, 50)
- assert_almost_equal(result[-1], static_comp(arr[10:-10]))
+ tm.assert_almost_equal(result[-1], static_comp(arr[10:-10]))
# GH 7925
if has_center:
@@ -1210,7 +1215,8 @@ def get_result(arr, window, min_periods=None, center=False):
if test_stable:
result = get_result(self.arr + 1e9, window)
- assert_almost_equal(result[-1], static_comp(self.arr[-50:] + 1e9))
+ tm.assert_almost_equal(result[-1],
+ static_comp(self.arr[-50:] + 1e9))
# Test window larger than array, #7297
if test_window:
@@ -1224,14 +1230,15 @@ def get_result(arr, window, min_periods=None, center=False):
self.assertTrue(np.array_equal(nan_mask, np.isnan(
expected)))
nan_mask = ~nan_mask
- assert_almost_equal(result[nan_mask], expected[nan_mask])
+ tm.assert_almost_equal(result[nan_mask],
+ expected[nan_mask])
else:
result = get_result(self.arr, len(self.arr) + 1)
expected = get_result(self.arr, len(self.arr))
nan_mask = np.isnan(result)
self.assertTrue(np.array_equal(nan_mask, np.isnan(expected)))
nan_mask = ~nan_mask
- assert_almost_equal(result[nan_mask], expected[nan_mask])
+ tm.assert_almost_equal(result[nan_mask], expected[nan_mask])
def _check_structures(self, f, static_comp, name=None,
has_min_periods=True, has_time_rule=True,
@@ -1283,11 +1290,12 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
trunc_series = self.series[::2].truncate(prev_date, last_date)
trunc_frame = self.frame[::2].truncate(prev_date, last_date)
- assert_almost_equal(series_result[-1], static_comp(trunc_series))
+ self.assertAlmostEqual(series_result[-1],
+ static_comp(trunc_series))
- assert_series_equal(frame_result.xs(last_date),
- trunc_frame.apply(static_comp),
- check_names=False)
+ tm.assert_series_equal(frame_result.xs(last_date),
+ trunc_frame.apply(static_comp),
+ check_names=False)
# GH 7925
if has_center:
@@ -1326,8 +1334,8 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
if fill_value is not None:
series_xp = series_xp.fillna(fill_value)
frame_xp = frame_xp.fillna(fill_value)
- assert_series_equal(series_xp, series_rs)
- assert_frame_equal(frame_xp, frame_rs)
+ tm.assert_series_equal(series_xp, series_rs)
+ tm.assert_frame_equal(frame_xp, frame_rs)
def test_ewma(self):
self._check_ew(mom.ewma, name='mean')
@@ -1347,7 +1355,7 @@ def test_ewma(self):
lambda s: s.ewm(com=2.0, adjust=True, ignore_na=True).mean(),
]:
result = f(s)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
expected = Series([1.0, 1.333333, 2.222222, 4.148148])
for f in [lambda s: s.ewm(com=2.0, adjust=False).mean(),
@@ -1357,7 +1365,7 @@ def test_ewma(self):
ignore_na=True).mean(),
]:
result = f(s)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_ewma_nan_handling(self):
s = Series([1.] + [np.nan] * 5 + [1.])
@@ -1408,11 +1416,11 @@ def simple_wma(s, w):
expected = simple_wma(s, Series(w))
result = s.ewm(com=com, adjust=adjust, ignore_na=ignore_na).mean()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
if ignore_na is False:
# check that ignore_na defaults to False
result = s.ewm(com=com, adjust=adjust).mean()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_ewmvar(self):
self._check_ew(mom.ewmvar, name='var')
@@ -1424,7 +1432,7 @@ def test_ewma_span_com_args(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
A = mom.ewma(self.arr, com=9.5)
B = mom.ewma(self.arr, span=20)
- assert_almost_equal(A, B)
+ tm.assert_almost_equal(A, B)
self.assertRaises(ValueError, mom.ewma, self.arr, com=9.5, span=20)
self.assertRaises(ValueError, mom.ewma, self.arr)
@@ -1433,7 +1441,7 @@ def test_ewma_halflife_arg(self):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
A = mom.ewma(self.arr, com=13.932726172912965)
B = mom.ewma(self.arr, halflife=10.0)
- assert_almost_equal(A, B)
+ tm.assert_almost_equal(A, B)
self.assertRaises(ValueError, mom.ewma, self.arr, span=20,
halflife=50)
@@ -1450,9 +1458,9 @@ def test_ewma_alpha_old_api(self):
b = mom.ewma(self.arr, com=0.62014947789973052)
c = mom.ewma(self.arr, span=2.240298955799461)
d = mom.ewma(self.arr, halflife=0.721792864318)
- assert_numpy_array_equal(a, b)
- assert_numpy_array_equal(a, c)
- assert_numpy_array_equal(a, d)
+ tm.assert_numpy_array_equal(a, b)
+ tm.assert_numpy_array_equal(a, c)
+ tm.assert_numpy_array_equal(a, d)
def test_ewma_alpha_arg_old_api(self):
# GH 10789
@@ -1472,9 +1480,9 @@ def test_ewm_alpha(self):
b = s.ewm(com=0.62014947789973052).mean()
c = s.ewm(span=2.240298955799461).mean()
d = s.ewm(halflife=0.721792864318).mean()
- assert_series_equal(a, b)
- assert_series_equal(a, c)
- assert_series_equal(a, d)
+ tm.assert_series_equal(a, b)
+ tm.assert_series_equal(a, c)
+ tm.assert_series_equal(a, d)
def test_ewm_alpha_arg(self):
# GH 10789
@@ -1516,7 +1524,7 @@ def test_ew_empty_arrays(self):
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
result = f(arr, 3)
- assert_almost_equal(result, arr)
+ tm.assert_almost_equal(result, arr)
def _check_ew(self, func, name=None):
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -1553,16 +1561,16 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
# check series of length 0
result = func(Series([]), 50, min_periods=min_periods)
- assert_series_equal(result, Series([]))
+ tm.assert_series_equal(result, Series([]))
# check series of length 1
result = func(Series([1.]), 50, min_periods=min_periods)
if func == mom.ewma:
- assert_series_equal(result, Series([1.]))
+ tm.assert_series_equal(result, Series([1.]))
else:
# ewmstd, ewmvol, ewmvar with bias=False require at least two
# values
- assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.NaN]))
# pass in ints
result2 = func(np.arange(50), span=10)
@@ -1694,8 +1702,6 @@ def _non_null_values(x):
return set(values[notnull(values)].tolist())
for (x, is_constant, no_nans) in self.data:
- assert_equal = assert_series_equal if isinstance(
- x, Series) else assert_frame_equal
count_x = count(x)
mean_x = mean(x)
@@ -1800,7 +1806,7 @@ def _non_null_values(x):
assert_equal(cov_x_y, mean_x_times_y -
(mean_x * mean_y))
- @slow
+ @tm.slow
def test_ewm_consistency(self):
def _weights(s, com, adjust, ignore_na):
if isinstance(s, DataFrame):
@@ -1899,7 +1905,7 @@ def _ewma(s, com, min_periods, adjust, ignore_na):
_variance_debiasing_factors(x, com=com, adjust=adjust,
ignore_na=ignore_na)))
- @slow
+ @tm.slow
def test_expanding_consistency(self):
# suppress warnings about empty slices, as we are deliberately testing
@@ -1942,8 +1948,6 @@ def test_expanding_consistency(self):
# expanding_apply of Series.xyz(), or (b) expanding_apply of
# np.nanxyz()
for (x, is_constant, no_nans) in self.data:
- assert_equal = assert_series_equal if isinstance(
- x, Series) else assert_frame_equal
functions = self.base_functions
# GH 8269
@@ -1988,9 +1992,9 @@ def test_expanding_consistency(self):
x.iloc[:, i].expanding(
min_periods=min_periods),
name)(x.iloc[:, j])
- assert_panel_equal(expanding_f_result, expected)
+ tm.assert_panel_equal(expanding_f_result, expected)
- @slow
+ @tm.slow
def test_rolling_consistency(self):
# suppress warnings about empty slices, as we are deliberately testing
@@ -2062,10 +2066,6 @@ def cases():
# rolling_apply of Series.xyz(), or (b) rolling_apply of
# np.nanxyz()
for (x, is_constant, no_nans) in self.data:
-
- assert_equal = (assert_series_equal
- if isinstance(x, Series) else
- assert_frame_equal)
functions = self.base_functions
# GH 8269
@@ -2116,7 +2116,7 @@ def cases():
min_periods=min_periods,
center=center),
name)(x.iloc[:, j]))
- assert_panel_equal(rolling_f_result, expected)
+ tm.assert_panel_equal(rolling_f_result, expected)
# binary moments
def test_rolling_cov(self):
@@ -2124,7 +2124,7 @@ def test_rolling_cov(self):
B = A + randn(len(A))
result = A.rolling(window=50, min_periods=25).cov(B)
- assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])
+ tm.assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])
def test_rolling_cov_pairwise(self):
self._check_pairwise_moment('rolling', 'cov', window=10, min_periods=5)
@@ -2134,7 +2134,7 @@ def test_rolling_corr(self):
B = A + randn(len(A))
result = A.rolling(window=50, min_periods=25).corr(B)
- assert_almost_equal(result[-1], np.corrcoef(A[-50:], B[-50:])[0, 1])
+ tm.assert_almost_equal(result[-1], np.corrcoef(A[-50:], B[-50:])[0, 1])
# test for correct bias correction
a = tm.makeTimeSeries()
@@ -2143,7 +2143,7 @@ def test_rolling_corr(self):
b[:10] = np.nan
result = a.rolling(window=len(a), min_periods=1).corr(b)
- assert_almost_equal(result[-1], a.corr(b))
+ tm.assert_almost_equal(result[-1], a.corr(b))
def test_rolling_corr_pairwise(self):
self._check_pairwise_moment('rolling', 'corr', window=10,
@@ -2244,18 +2244,18 @@ def func(A, B, com, **kwargs):
# check series of length 0
result = func(Series([]), Series([]), 50, min_periods=min_periods)
- assert_series_equal(result, Series([]))
+ tm.assert_series_equal(result, Series([]))
# check series of length 1
result = func(
Series([1.]), Series([1.]), 50, min_periods=min_periods)
- assert_series_equal(result, Series([np.NaN]))
+ tm.assert_series_equal(result, Series([np.NaN]))
self.assertRaises(Exception, func, A, randn(50), 20, min_periods=5)
def test_expanding_apply(self):
ser = Series([])
- assert_series_equal(ser, ser.expanding().apply(lambda x: x.mean()))
+ tm.assert_series_equal(ser, ser.expanding().apply(lambda x: x.mean()))
def expanding_mean(x, min_periods=1, freq=None):
return mom.expanding_apply(x, lambda x: x.mean(),
@@ -2267,7 +2267,7 @@ def expanding_mean(x, min_periods=1, freq=None):
s = Series([None, None, None])
result = s.expanding(min_periods=0).apply(lambda x: len(x))
expected = Series([1., 2., 3.])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_expanding_apply_args_kwargs(self):
def mean_w_arg(x, const):
@@ -2277,11 +2277,11 @@ def mean_w_arg(x, const):
expected = df.expanding().apply(np.mean) + 20.
- assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20, )),
- expected)
- assert_frame_equal(df.expanding().apply(mean_w_arg,
- kwargs={'const': 20}),
- expected)
+ tm.assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20, )),
+ expected)
+ tm.assert_frame_equal(df.expanding().apply(mean_w_arg,
+ kwargs={'const': 20}),
+ expected)
def test_expanding_corr(self):
A = self.series.dropna()
@@ -2291,11 +2291,11 @@ def test_expanding_corr(self):
rolling_result = A.rolling(window=len(A), min_periods=1).corr(B)
- assert_almost_equal(rolling_result, result)
+ tm.assert_almost_equal(rolling_result, result)
def test_expanding_count(self):
result = self.series.expanding().count()
- assert_almost_equal(result, self.series.rolling(
+ tm.assert_almost_equal(result, self.series.rolling(
window=len(self.series)).count())
def test_expanding_quantile(self):
@@ -2304,7 +2304,7 @@ def test_expanding_quantile(self):
rolling_result = self.series.rolling(window=len(self.series),
min_periods=1).quantile(0.5)
- assert_almost_equal(result, rolling_result)
+ tm.assert_almost_equal(result, rolling_result)
def test_expanding_cov(self):
A = self.series
@@ -2314,7 +2314,7 @@ def test_expanding_cov(self):
rolling_result = A.rolling(window=len(A), min_periods=1).cov(B)
- assert_almost_equal(rolling_result, result)
+ tm.assert_almost_equal(rolling_result, result)
def test_expanding_max(self):
self._check_expanding(mom.expanding_max, np.max, preserve_nan=False)
@@ -2326,7 +2326,7 @@ def test_expanding_cov_pairwise(self):
min_periods=1).corr()
for i in result.items:
- assert_almost_equal(result[i], rolling_result[i])
+ tm.assert_almost_equal(result[i], rolling_result[i])
def test_expanding_corr_pairwise(self):
result = self.frame.expanding().corr()
@@ -2335,7 +2335,7 @@ def test_expanding_corr_pairwise(self):
min_periods=1).corr()
for i in result.items:
- assert_almost_equal(result[i], rolling_result[i])
+ tm.assert_almost_equal(result[i], rolling_result[i])
def test_expanding_cov_diff_index(self):
# GH 7512
@@ -2343,17 +2343,17 @@ def test_expanding_cov_diff_index(self):
s2 = Series([1, 3], index=[0, 2])
result = s1.expanding().cov(s2)
expected = Series([None, None, 2.0])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s2a = Series([1, None, 3], index=[0, 1, 2])
result = s1.expanding().cov(s2a)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s1 = Series([7, 8, 10], index=[0, 1, 3])
s2 = Series([7, 9, 10], index=[0, 2, 3])
result = s1.expanding().cov(s2)
expected = Series([None, None, None, 4.5])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_expanding_corr_diff_index(self):
# GH 7512
@@ -2361,17 +2361,17 @@ def test_expanding_corr_diff_index(self):
s2 = Series([1, 3], index=[0, 2])
result = s1.expanding().corr(s2)
expected = Series([None, None, 1.0])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s2a = Series([1, None, 3], index=[0, 1, 2])
result = s1.expanding().corr(s2a)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s1 = Series([7, 8, 10], index=[0, 1, 3])
s2 = Series([7, 9, 10], index=[0, 2, 3])
result = s1.expanding().corr(s2)
expected = Series([None, None, None, 1.])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling_cov_diff_length(self):
# GH 7512
@@ -2379,11 +2379,11 @@ def test_rolling_cov_diff_length(self):
s2 = Series([1, 3], index=[0, 2])
result = s1.rolling(window=3, min_periods=2).cov(s2)
expected = Series([None, None, 2.0])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s2a = Series([1, None, 3], index=[0, 1, 2])
result = s1.rolling(window=3, min_periods=2).cov(s2a)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling_corr_diff_length(self):
# GH 7512
@@ -2391,11 +2391,11 @@ def test_rolling_corr_diff_length(self):
s2 = Series([1, 3], index=[0, 2])
result = s1.rolling(window=3, min_periods=2).corr(s2)
expected = Series([None, None, 1.0])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
s2a = Series([1, None, 3], index=[0, 1, 2])
result = s1.rolling(window=3, min_periods=2).corr(s2a)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling_functions_window_non_shrinkage(self):
# GH 7764
@@ -2427,10 +2427,10 @@ def test_rolling_functions_window_non_shrinkage(self):
for f in functions:
try:
s_result = f(s)
- assert_series_equal(s_result, s_expected)
+ tm.assert_series_equal(s_result, s_expected)
df_result = f(df)
- assert_frame_equal(df_result, df_expected)
+ tm.assert_frame_equal(df_result, df_expected)
except (ImportError):
# scipy needed for rolling_window
@@ -2442,7 +2442,7 @@ def test_rolling_functions_window_non_shrinkage(self):
.corr(x, pairwise=True))]
for f in functions:
df_result_panel = f(df)
- assert_panel_equal(df_result_panel, df_expected_panel)
+ tm.assert_panel_equal(df_result_panel, df_expected_panel)
def test_moment_functions_zero_length(self):
# GH 8056
@@ -2497,13 +2497,13 @@ def test_moment_functions_zero_length(self):
for f in functions:
try:
s_result = f(s)
- assert_series_equal(s_result, s_expected)
+ tm.assert_series_equal(s_result, s_expected)
df1_result = f(df1)
- assert_frame_equal(df1_result, df1_expected)
+ tm.assert_frame_equal(df1_result, df1_expected)
df2_result = f(df2)
- assert_frame_equal(df2_result, df2_expected)
+ tm.assert_frame_equal(df2_result, df2_expected)
except (ImportError):
# scipy needed for rolling_window
@@ -2520,10 +2520,10 @@ def test_moment_functions_zero_length(self):
]
for f in functions:
df1_result_panel = f(df1)
- assert_panel_equal(df1_result_panel, df1_expected_panel)
+ tm.assert_panel_equal(df1_result_panel, df1_expected_panel)
df2_result_panel = f(df2)
- assert_panel_equal(df2_result_panel, df2_expected_panel)
+ tm.assert_panel_equal(df2_result_panel, df2_expected_panel)
def test_expanding_cov_pairwise_diff_length(self):
# GH 7512
@@ -2537,10 +2537,10 @@ def test_expanding_cov_pairwise_diff_length(self):
result4 = df1a.expanding().cov(df2a, pairwise=True)[2]
expected = DataFrame([[-3., -5.], [-6., -10.]], index=['A', 'B'],
columns=['X', 'Y'])
- assert_frame_equal(result1, expected)
- assert_frame_equal(result2, expected)
- assert_frame_equal(result3, expected)
- assert_frame_equal(result4, expected)
+ tm.assert_frame_equal(result1, expected)
+ tm.assert_frame_equal(result2, expected)
+ tm.assert_frame_equal(result3, expected)
+ tm.assert_frame_equal(result4, expected)
def test_expanding_corr_pairwise_diff_length(self):
# GH 7512
@@ -2554,35 +2554,29 @@ def test_expanding_corr_pairwise_diff_length(self):
result4 = df1a.expanding().corr(df2a, pairwise=True)[2]
expected = DataFrame([[-1.0, -1.0], [-1.0, -1.0]], index=['A', 'B'],
columns=['X', 'Y'])
- assert_frame_equal(result1, expected)
- assert_frame_equal(result2, expected)
- assert_frame_equal(result3, expected)
- assert_frame_equal(result4, expected)
+ tm.assert_frame_equal(result1, expected)
+ tm.assert_frame_equal(result2, expected)
+ tm.assert_frame_equal(result3, expected)
+ tm.assert_frame_equal(result4, expected)
def test_pairwise_stats_column_names_order(self):
# GH 7738
df1s = [DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0, 1]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 1]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 'C']),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1., 0]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0., 1]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 1]),
- DataFrame(
- [[2., 4.], [1., 2.], [5., 2.], [8., 1.]], columns=[1, 0.]),
- DataFrame(
- [[2, 4.], [1, 2.], [5, 2.], [8, 1.]], columns=[0, 1.]),
- DataFrame(
- [[2, 4], [1, 2], [5, 2], [8, 1.]], columns=[1., 'X']), ]
- df2 = DataFrame(
- [[None, 1, 1], [None, 1, 2], [None, 3, 2], [None, 8, 1]
- ], columns=['Y', 'Z', 'X'])
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0]),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 1]),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]],
+ columns=['C', 'C']),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1., 0]),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0., 1]),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 1]),
+ DataFrame([[2., 4.], [1., 2.], [5., 2.], [8., 1.]],
+ columns=[1, 0.]),
+ DataFrame([[2, 4.], [1, 2.], [5, 2.], [8, 1.]],
+ columns=[0, 1.]),
+ DataFrame([[2, 4], [1, 2], [5, 2], [8, 1.]],
+ columns=[1., 'X']), ]
+ df2 = DataFrame([[None, 1, 1], [None, 1, 2],
+ [None, 3, 2], [None, 8, 1]], columns=['Y', 'Z', 'X'])
s = Series([1, 1, 3, 8])
# suppress warnings about incomparable objects, as we are deliberately
@@ -2596,11 +2590,13 @@ def test_pairwise_stats_column_names_order(self):
for f in [lambda x: x.cov(), lambda x: x.corr(), ]:
results = [f(df) for df in df1s]
for (df, result) in zip(df1s, results):
- assert_index_equal(result.index, df.columns)
- assert_index_equal(result.columns, df.columns)
+ tm.assert_index_equal(result.index, df.columns)
+ tm.assert_index_equal(result.columns, df.columns)
for i, result in enumerate(results):
if i > 0:
- self.assert_numpy_array_equal(result, results[0])
+ # compare internal values, as columns can be different
+ self.assert_numpy_array_equal(result.values,
+ results[0].values)
# DataFrame with itself, pairwise=True
for f in [lambda x: x.expanding().cov(pairwise=True),
@@ -2611,12 +2607,13 @@ def test_pairwise_stats_column_names_order(self):
lambda x: x.ewm(com=3).corr(pairwise=True), ]:
results = [f(df) for df in df1s]
for (df, result) in zip(df1s, results):
- assert_index_equal(result.items, df.index)
- assert_index_equal(result.major_axis, df.columns)
- assert_index_equal(result.minor_axis, df.columns)
+ tm.assert_index_equal(result.items, df.index)
+ tm.assert_index_equal(result.major_axis, df.columns)
+ tm.assert_index_equal(result.minor_axis, df.columns)
for i, result in enumerate(results):
if i > 0:
- self.assert_numpy_array_equal(result, results[0])
+ self.assert_numpy_array_equal(result.values,
+ results[0].values)
# DataFrame with itself, pairwise=False
for f in [lambda x: x.expanding().cov(pairwise=False),
@@ -2627,11 +2624,12 @@ def test_pairwise_stats_column_names_order(self):
lambda x: x.ewm(com=3).corr(pairwise=False), ]:
results = [f(df) for df in df1s]
for (df, result) in zip(df1s, results):
- assert_index_equal(result.index, df.index)
- assert_index_equal(result.columns, df.columns)
+ tm.assert_index_equal(result.index, df.index)
+ tm.assert_index_equal(result.columns, df.columns)
for i, result in enumerate(results):
if i > 0:
- self.assert_numpy_array_equal(result, results[0])
+ self.assert_numpy_array_equal(result.values,
+ results[0].values)
# DataFrame with another DataFrame, pairwise=True
for f in [lambda x, y: x.expanding().cov(y, pairwise=True),
@@ -2642,12 +2640,13 @@ def test_pairwise_stats_column_names_order(self):
lambda x, y: x.ewm(com=3).corr(y, pairwise=True), ]:
results = [f(df, df2) for df in df1s]
for (df, result) in zip(df1s, results):
- assert_index_equal(result.items, df.index)
- assert_index_equal(result.major_axis, df.columns)
- assert_index_equal(result.minor_axis, df2.columns)
+ tm.assert_index_equal(result.items, df.index)
+ tm.assert_index_equal(result.major_axis, df.columns)
+ tm.assert_index_equal(result.minor_axis, df2.columns)
for i, result in enumerate(results):
if i > 0:
- self.assert_numpy_array_equal(result, results[0])
+ self.assert_numpy_array_equal(result.values,
+ results[0].values)
# DataFrame with another DataFrame, pairwise=False
for f in [lambda x, y: x.expanding().cov(y, pairwise=False),
@@ -2662,8 +2661,8 @@ def test_pairwise_stats_column_names_order(self):
if result is not None:
expected_index = df.index.union(df2.index)
expected_columns = df.columns.union(df2.columns)
- assert_index_equal(result.index, expected_index)
- assert_index_equal(result.columns, expected_columns)
+ tm.assert_index_equal(result.index, expected_index)
+ tm.assert_index_equal(result.columns, expected_columns)
else:
tm.assertRaisesRegexp(
ValueError, "'arg1' columns are not unique", f, df,
@@ -2681,11 +2680,12 @@ def test_pairwise_stats_column_names_order(self):
lambda x, y: x.ewm(com=3).corr(y), ]:
results = [f(df, s) for df in df1s] + [f(s, df) for df in df1s]
for (df, result) in zip(df1s, results):
- assert_index_equal(result.index, df.index)
- assert_index_equal(result.columns, df.columns)
+ tm.assert_index_equal(result.index, df.index)
+ tm.assert_index_equal(result.columns, df.columns)
for i, result in enumerate(results):
if i > 0:
- self.assert_numpy_array_equal(result, results[0])
+ self.assert_numpy_array_equal(result.values,
+ results[0].values)
def test_rolling_skew_edge_cases(self):
@@ -2694,19 +2694,19 @@ def test_rolling_skew_edge_cases(self):
# yields all NaN (0 variance)
d = Series([1] * 5)
x = d.rolling(window=5).skew()
- assert_series_equal(all_nan, x)
+ tm.assert_series_equal(all_nan, x)
# yields all NaN (window too small)
d = Series(np.random.randn(5))
x = d.rolling(window=2).skew()
- assert_series_equal(all_nan, x)
+ tm.assert_series_equal(all_nan, x)
# yields [NaN, NaN, NaN, 0.177994, 1.548824]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401
])
expected = Series([np.NaN, np.NaN, np.NaN, 0.177994, 1.548824])
x = d.rolling(window=4).skew()
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
def test_rolling_kurt_edge_cases(self):
@@ -2715,25 +2715,25 @@ def test_rolling_kurt_edge_cases(self):
# yields all NaN (0 variance)
d = Series([1] * 5)
x = d.rolling(window=5).kurt()
- assert_series_equal(all_nan, x)
+ tm.assert_series_equal(all_nan, x)
# yields all NaN (window too small)
d = Series(np.random.randn(5))
x = d.rolling(window=3).kurt()
- assert_series_equal(all_nan, x)
+ tm.assert_series_equal(all_nan, x)
# yields [NaN, NaN, NaN, 1.224307, 2.671499]
d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401
])
expected = Series([np.NaN, np.NaN, np.NaN, 1.224307, 2.671499])
x = d.rolling(window=4).kurt()
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
has_time_rule=True, preserve_nan=True):
result = func(self.arr)
- assert_almost_equal(result[10], static_comp(self.arr[:11]))
+ tm.assert_almost_equal(result[10], static_comp(self.arr[:11]))
if preserve_nan:
assert (np.isnan(result[self._nan_locs]).all())
@@ -2743,7 +2743,7 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
if has_min_periods:
result = func(arr, min_periods=30)
assert (np.isnan(result[:29]).all())
- assert_almost_equal(result[-1], static_comp(arr[:50]))
+ tm.assert_almost_equal(result[-1], static_comp(arr[:50]))
# min_periods is working correctly
result = func(arr, min_periods=15)
@@ -2758,10 +2758,10 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
# min_periods=0
result0 = func(arr, min_periods=0)
result1 = func(arr, min_periods=1)
- assert_almost_equal(result0, result1)
+ tm.assert_almost_equal(result0, result1)
else:
result = func(arr)
- assert_almost_equal(result[-1], static_comp(arr[:50]))
+ tm.assert_almost_equal(result[-1], static_comp(arr[:50]))
def _check_expanding_structures(self, func):
series_result = func(self.series)
@@ -2795,7 +2795,7 @@ def test_rolling_max_gh6297(self):
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
x = series.rolling(window=1, freq='D').max()
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
def test_rolling_max_how_resample(self):
@@ -2814,14 +2814,14 @@ def test_rolling_max_how_resample(self):
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
x = series.rolling(window=1, freq='D').max()
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
# Now specify median (10.0)
expected = Series([0.0, 1.0, 2.0, 3.0, 10.0],
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
x = series.rolling(window=1, freq='D').max(how='median')
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
# Now specify mean (4+10+20)/3
v = (4.0 + 10.0 + 20.0) / 3.0
@@ -2829,7 +2829,7 @@ def test_rolling_max_how_resample(self):
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
x = series.rolling(window=1, freq='D').max(how='mean')
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
def test_rolling_min_how_resample(self):
@@ -2848,7 +2848,7 @@ def test_rolling_min_how_resample(self):
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
r = series.rolling(window=1, freq='D')
- assert_series_equal(expected, r.min())
+ tm.assert_series_equal(expected, r.min())
def test_rolling_median_how_resample(self):
@@ -2867,7 +2867,7 @@ def test_rolling_median_how_resample(self):
index=[datetime(1975, 1, i, 0) for i in range(1, 6)])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
x = series.rolling(window=1, freq='D').median()
- assert_series_equal(expected, x)
+ tm.assert_series_equal(expected, x)
def test_rolling_median_memory_error(self):
# GH11722
@@ -2917,16 +2917,16 @@ def test_getitem(self):
expected = g_mutated.B.apply(lambda x: x.rolling(2).mean())
result = g.rolling(2).mean().B
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = g.rolling(2).B.mean()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = g.B.rolling(2).mean()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = self.frame.B.groupby(self.frame.A).rolling(2).mean()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_getitem_multiple(self):
@@ -2937,10 +2937,10 @@ def test_getitem_multiple(self):
expected = g_mutated.B.apply(lambda x: x.rolling(2).count())
result = r.B.count()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = r.B.count()
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling(self):
g = self.frame.groupby('A')
@@ -2949,16 +2949,16 @@ def test_rolling(self):
for f in ['sum', 'mean', 'min', 'max', 'count', 'kurt', 'skew']:
result = getattr(r, f)()
expected = g.apply(lambda x: getattr(x.rolling(4), f)())
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
for f in ['std', 'var']:
result = getattr(r, f)(ddof=1)
expected = g.apply(lambda x: getattr(x.rolling(4), f)(ddof=1))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = r.quantile(0.5)
expected = g.apply(lambda x: x.rolling(4).quantile(0.5))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_rolling_corr_cov(self):
g = self.frame.groupby('A')
@@ -2970,14 +2970,14 @@ def test_rolling_corr_cov(self):
def func(x):
return getattr(x.rolling(4), f)(self.frame)
expected = g.apply(func)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = getattr(r.B, f)(pairwise=True)
def func(x):
return getattr(x.B.rolling(4), f)(pairwise=True)
expected = g.apply(func)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_rolling_apply(self):
g = self.frame.groupby('A')
@@ -2986,7 +2986,7 @@ def test_rolling_apply(self):
# reduction
result = r.apply(lambda x: x.sum())
expected = g.apply(lambda x: x.rolling(4).apply(lambda y: y.sum()))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_expanding(self):
g = self.frame.groupby('A')
@@ -2995,16 +2995,16 @@ def test_expanding(self):
for f in ['sum', 'mean', 'min', 'max', 'count', 'kurt', 'skew']:
result = getattr(r, f)()
expected = g.apply(lambda x: getattr(x.expanding(), f)())
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
for f in ['std', 'var']:
result = getattr(r, f)(ddof=0)
expected = g.apply(lambda x: getattr(x.expanding(), f)(ddof=0))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = r.quantile(0.5)
expected = g.apply(lambda x: x.expanding().quantile(0.5))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
def test_expanding_corr_cov(self):
g = self.frame.groupby('A')
@@ -3016,14 +3016,14 @@ def test_expanding_corr_cov(self):
def func(x):
return getattr(x.expanding(), f)(self.frame)
expected = g.apply(func)
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
result = getattr(r.B, f)(pairwise=True)
def func(x):
return getattr(x.B.expanding(), f)(pairwise=True)
expected = g.apply(func)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_expanding_apply(self):
g = self.frame.groupby('A')
@@ -3032,4 +3032,4 @@ def test_expanding_apply(self):
# reduction
result = r.apply(lambda x: x.sum())
expected = g.apply(lambda x: x.expanding().apply(lambda y: y.sum()))
- assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py
index 62bd12130ca53..9d9b0635e0f35 100644
--- a/pandas/tools/tests/test_concat.py
+++ b/pandas/tools/tests/test_concat.py
@@ -266,7 +266,8 @@ def test_concat_keys_specific_levels(self):
levels=[level],
names=['group_key'])
- self.assert_numpy_array_equal(result.columns.levels[0], level)
+ self.assert_index_equal(result.columns.levels[0],
+ Index(level, name='group_key'))
self.assertEqual(result.columns.names[0], 'group_key')
def test_concat_dataframe_keys_bug(self):
@@ -413,7 +414,8 @@ def test_concat_keys_and_levels(self):
('baz', 'one'), ('baz', 'two')],
names=['first', 'second'])
self.assertEqual(result.index.names, ('first', 'second') + (None,))
- self.assert_numpy_array_equal(result.index.levels[0], ['baz', 'foo'])
+ self.assert_index_equal(result.index.levels[0],
+ Index(['baz', 'foo'], name='first'))
def test_concat_keys_levels_no_overlap(self):
# GH #1406
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index efbe4c17ea544..2505309768997 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -200,8 +200,10 @@ def test_join_on(self):
source = self.source
merged = target.join(source, on='C')
- self.assert_numpy_array_equal(merged['MergedA'], target['A'])
- self.assert_numpy_array_equal(merged['MergedD'], target['D'])
+ self.assert_series_equal(merged['MergedA'], target['A'],
+ check_names=False)
+ self.assert_series_equal(merged['MergedD'], target['D'],
+ check_names=False)
# join with duplicates (fix regression from DataFrame/Matrix merge)
df = DataFrame({'key': ['a', 'a', 'b', 'b', 'c']})
@@ -286,7 +288,7 @@ def test_join_with_len0(self):
merged2 = self.target.join(self.source.reindex([]), on='C',
how='inner')
- self.assertTrue(merged2.columns.equals(merged.columns))
+ self.assert_index_equal(merged2.columns, merged.columns)
self.assertEqual(len(merged2), 0)
def test_join_on_inner(self):
@@ -297,9 +299,11 @@ def test_join_on_inner(self):
expected = df.join(df2, on='key')
expected = expected[expected['value'].notnull()]
- self.assert_numpy_array_equal(joined['key'], expected['key'])
- self.assert_numpy_array_equal(joined['value'], expected['value'])
- self.assertTrue(joined.index.equals(expected.index))
+ self.assert_series_equal(joined['key'], expected['key'],
+ check_dtype=False)
+ self.assert_series_equal(joined['value'], expected['value'],
+ check_dtype=False)
+ self.assert_index_equal(joined.index, expected.index)
def test_join_on_singlekey_list(self):
df = DataFrame({'key': ['a', 'a', 'b', 'b', 'c']})
@@ -662,7 +666,7 @@ def test_join_sort(self):
# smoke test
joined = left.join(right, on='key', sort=False)
- self.assert_numpy_array_equal(joined.index, lrange(4))
+ self.assert_index_equal(joined.index, pd.Index(lrange(4)))
def test_intelligently_handle_join_key(self):
# #733, be a bit more 1337 about not returning unconsolidated DataFrame
@@ -722,15 +726,16 @@ def test_handle_join_key_pass_array(self):
rkey = np.array([1, 1, 2, 3, 4, 5])
merged = merge(left, right, left_on=lkey, right_on=rkey, how='outer')
- self.assert_numpy_array_equal(merged['key_0'],
- np.array([1, 1, 1, 1, 2, 2, 3, 4, 5]))
+ self.assert_series_equal(merged['key_0'],
+ Series([1, 1, 1, 1, 2, 2, 3, 4, 5],
+ name='key_0'))
left = DataFrame({'value': lrange(3)})
right = DataFrame({'rvalue': lrange(6)})
- key = np.array([0, 1, 1, 2, 2, 3])
+ key = np.array([0, 1, 1, 2, 2, 3], dtype=np.int64)
merged = merge(left, right, left_index=True, right_on=key, how='outer')
- self.assert_numpy_array_equal(merged['key_0'], key)
+ self.assert_series_equal(merged['key_0'], Series(key, name='key_0'))
def test_mixed_type_join_with_suffix(self):
# GH #916
diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py
index 55f27e1466a92..0b91fd1ef1c02 100644
--- a/pandas/tools/tests/test_tile.py
+++ b/pandas/tools/tests/test_tile.py
@@ -4,7 +4,7 @@
import numpy as np
from pandas.compat import zip
-from pandas import Series
+from pandas import Series, Index
import pandas.util.testing as tm
from pandas.util.testing import assertRaisesRegexp
import pandas.core.common as com
@@ -19,32 +19,41 @@ class TestCut(tm.TestCase):
def test_simple(self):
data = np.ones(5)
result = cut(data, 4, labels=False)
- desired = [1, 1, 1, 1, 1]
+ desired = np.array([1, 1, 1, 1, 1], dtype=np.int64)
tm.assert_numpy_array_equal(result, desired)
def test_bins(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
result, bins = cut(data, 3, retbins=True)
- tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 1, 2, 0])
- tm.assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
+
+ exp_codes = np.array([0, 0, 0, 1, 2, 0], dtype=np.int8)
+ tm.assert_numpy_array_equal(result.codes, exp_codes)
+ exp = np.array([0.1905, 3.36666667, 6.53333333, 9.7])
+ tm.assert_almost_equal(bins, exp)
def test_right(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
result, bins = cut(data, 4, right=True, retbins=True)
- tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 2, 3, 0, 0])
- tm.assert_almost_equal(bins, [0.1905, 2.575, 4.95, 7.325, 9.7])
+ exp_codes = np.array([0, 0, 0, 2, 3, 0, 0], dtype=np.int8)
+ tm.assert_numpy_array_equal(result.codes, exp_codes)
+ exp = np.array([0.1905, 2.575, 4.95, 7.325, 9.7])
+ tm.assert_numpy_array_equal(bins, exp)
def test_noright(self):
data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
result, bins = cut(data, 4, right=False, retbins=True)
- tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 2, 3, 0, 1])
- tm.assert_almost_equal(bins, [0.2, 2.575, 4.95, 7.325, 9.7095])
+ exp_codes = np.array([0, 0, 0, 2, 3, 0, 1], dtype=np.int8)
+ tm.assert_numpy_array_equal(result.codes, exp_codes)
+ exp = np.array([0.2, 2.575, 4.95, 7.325, 9.7095])
+ tm.assert_almost_equal(bins, exp)
def test_arraylike(self):
data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
result, bins = cut(data, 3, retbins=True)
- tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 1, 2, 0])
- tm.assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
+ exp_codes = np.array([0, 0, 0, 1, 2, 0], dtype=np.int8)
+ tm.assert_numpy_array_equal(result.codes, exp_codes)
+ exp = np.array([0.1905, 3.36666667, 6.53333333, 9.7])
+ tm.assert_almost_equal(bins, exp)
def test_bins_not_monotonic(self):
data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
@@ -72,14 +81,14 @@ def test_labels(self):
arr = np.tile(np.arange(0, 1.01, 0.1), 4)
result, bins = cut(arr, 4, retbins=True)
- ex_levels = ['(-0.001, 0.25]', '(0.25, 0.5]', '(0.5, 0.75]',
- '(0.75, 1]']
- self.assert_numpy_array_equal(result.categories, ex_levels)
+ ex_levels = Index(['(-0.001, 0.25]', '(0.25, 0.5]', '(0.5, 0.75]',
+ '(0.75, 1]'])
+ self.assert_index_equal(result.categories, ex_levels)
result, bins = cut(arr, 4, retbins=True, right=False)
- ex_levels = ['[0, 0.25)', '[0.25, 0.5)', '[0.5, 0.75)',
- '[0.75, 1.001)']
- self.assert_numpy_array_equal(result.categories, ex_levels)
+ ex_levels = Index(['[0, 0.25)', '[0.25, 0.5)', '[0.5, 0.75)',
+ '[0.75, 1.001)'])
+ self.assert_index_equal(result.categories, ex_levels)
def test_cut_pass_series_name_to_factor(self):
s = Series(np.random.randn(100), name='foo')
@@ -91,9 +100,9 @@ def test_label_precision(self):
arr = np.arange(0, 0.73, 0.01)
result = cut(arr, 4, precision=2)
- ex_levels = ['(-0.00072, 0.18]', '(0.18, 0.36]', '(0.36, 0.54]',
- '(0.54, 0.72]']
- self.assert_numpy_array_equal(result.categories, ex_levels)
+ ex_levels = Index(['(-0.00072, 0.18]', '(0.18, 0.36]',
+ '(0.36, 0.54]', '(0.54, 0.72]'])
+ self.assert_index_equal(result.categories, ex_levels)
def test_na_handling(self):
arr = np.arange(0, 0.75, 0.01)
@@ -118,10 +127,10 @@ def test_inf_handling(self):
result = cut(data, [-np.inf, 2, 4, np.inf])
result_ser = cut(data_ser, [-np.inf, 2, 4, np.inf])
- ex_categories = ['(-inf, 2]', '(2, 4]', '(4, inf]']
+ ex_categories = Index(['(-inf, 2]', '(2, 4]', '(4, inf]'])
- tm.assert_numpy_array_equal(result.categories, ex_categories)
- tm.assert_numpy_array_equal(result_ser.cat.categories, ex_categories)
+ tm.assert_index_equal(result.categories, ex_categories)
+ tm.assert_index_equal(result_ser.cat.categories, ex_categories)
self.assertEqual(result[5], '(4, inf]')
self.assertEqual(result[0], '(-inf, 2]')
self.assertEqual(result_ser[5], '(4, inf]')
@@ -135,7 +144,7 @@ def test_qcut(self):
tm.assert_almost_equal(bins, ex_bins)
ex_levels = cut(arr, ex_bins, include_lowest=True)
- self.assert_numpy_array_equal(labels, ex_levels)
+ self.assert_categorical_equal(labels, ex_levels)
def test_qcut_bounds(self):
arr = np.random.randn(1000)
@@ -148,7 +157,7 @@ def test_qcut_specify_quantiles(self):
factor = qcut(arr, [0, .25, .5, .75, 1.])
expected = qcut(arr, 4)
- self.assertTrue(factor.equals(expected))
+ tm.assert_categorical_equal(factor, expected)
def test_qcut_all_bins_same(self):
assertRaisesRegexp(ValueError, "edges.*unique", qcut,
@@ -173,7 +182,7 @@ def test_cut_pass_labels(self):
exp = cut(arr, bins)
exp.categories = labels
- self.assertTrue(result.equals(exp))
+ tm.assert_categorical_equal(result, exp)
def test_qcut_include_lowest(self):
values = np.arange(10)
@@ -253,12 +262,14 @@ def test_series_retbins(self):
# GH 8589
s = Series(np.arange(4))
result, bins = cut(s, 2, retbins=True)
- tm.assert_numpy_array_equal(result.cat.codes.values, [0, 0, 1, 1])
- tm.assert_almost_equal(bins, [-0.003, 1.5, 3])
+ tm.assert_numpy_array_equal(result.cat.codes.values,
+ np.array([0, 0, 1, 1], dtype=np.int8))
+ tm.assert_numpy_array_equal(bins, np.array([-0.003, 1.5, 3]))
result, bins = qcut(s, 2, retbins=True)
- tm.assert_numpy_array_equal(result.cat.codes.values, [0, 0, 1, 1])
- tm.assert_almost_equal(bins, [0, 1.5, 3])
+ tm.assert_numpy_array_equal(result.cat.codes.values,
+ np.array([0, 0, 1, 1], dtype=np.int8))
+ tm.assert_numpy_array_equal(bins, np.array([0, 1.5, 3]))
def curpath():
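The `test_tile.py` hunks all hinge on one detail: the categorical codes returned by `cut` are `int8`, so once the expected value is built as a real ndarray (instead of a bare list), its dtype must be `np.int8` for a strict comparison to pass. A quick sketch:

```python
import numpy as np
import pandas as pd

data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
result, bins = pd.cut(data, 3, retbins=True)

# Categorical codes are stored compactly as int8, so the expected
# array in a dtype-checking comparison must be int8 as well.
assert result.codes.dtype == np.int8
np.testing.assert_array_equal(result.codes,
                              np.array([0, 0, 0, 1, 2, 0], dtype=np.int8))

# The bin edges stay float64, matching the exp arrays in the diff.
np.testing.assert_allclose(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
```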
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index 92a41199f264d..4e704554f982f 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -18,18 +18,21 @@ class TestCartesianProduct(tm.TestCase):
def test_simple(self):
x, y = list('ABC'), [1, 22]
- result = cartesian_product([x, y])
- expected = [np.array(['A', 'A', 'B', 'B', 'C', 'C']),
- np.array([1, 22, 1, 22, 1, 22])]
- tm.assert_numpy_array_equal(result, expected)
+ result1, result2 = cartesian_product([x, y])
+ expected1 = np.array(['A', 'A', 'B', 'B', 'C', 'C'])
+ expected2 = np.array([1, 22, 1, 22, 1, 22])
+ tm.assert_numpy_array_equal(result1, expected1)
+ tm.assert_numpy_array_equal(result2, expected2)
def test_datetimeindex(self):
# regression test for GitHub issue #6439
# make sure that the ordering on datetimeindex is consistent
x = date_range('2000-01-01', periods=2)
- result = [Index(y).day for y in cartesian_product([x, x])]
- expected = [np.array([1, 1, 2, 2]), np.array([1, 2, 1, 2])]
- tm.assert_numpy_array_equal(result, expected)
+ result1, result2 = [Index(y).day for y in cartesian_product([x, x])]
+ expected1 = np.array([1, 1, 2, 2], dtype=np.int32)
+ expected2 = np.array([1, 2, 1, 2], dtype=np.int32)
+ tm.assert_numpy_array_equal(result1, expected1)
+ tm.assert_numpy_array_equal(result2, expected2)
class TestLocaleUtils(tm.TestCase):
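The `test_util.py` hunks unpack `cartesian_product`'s list result into separate arrays because `assert_numpy_array_equal` compares one array pair at a time. The ordering being tested can be sketched with a minimal reimplementation (this is an illustrative sketch, not pandas' internal code):

```python
import numpy as np

def cartesian_product_sketch(arrays):
    # Element i of every output array together forms one combination;
    # earlier inputs vary slowest, later inputs fastest.
    arrays = [np.asarray(a) for a in arrays]
    n = int(np.prod([len(a) for a in arrays]))
    out, repeat = [], n
    for a in arrays:
        repeat //= len(a)
        tile = n // (repeat * len(a))
        out.append(np.tile(np.repeat(a, repeat), tile))
    return out

x, y = list('ABC'), [1, 22]
r1, r2 = cartesian_product_sketch([x, y])
# r1 -> ['A' 'A' 'B' 'B' 'C' 'C'], r2 -> [1 22 1 22 1 22],
# matching expected1/expected2 in the diff above.
```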
diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py
index 97b551070f541..7077a23d5abcb 100644
--- a/pandas/tseries/tests/test_base.py
+++ b/pandas/tseries/tests/test_base.py
@@ -62,7 +62,7 @@ def test_asobject_tolist(self):
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
@@ -76,7 +76,7 @@ def test_asobject_tolist(self):
result = idx.asobject
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
@@ -89,7 +89,7 @@ def test_asobject_tolist(self):
result = idx.asobject
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
@@ -726,7 +726,7 @@ def test_asobject_tolist(self):
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
@@ -738,7 +738,7 @@ def test_asobject_tolist(self):
result = idx.asobject
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
@@ -1489,7 +1489,7 @@ def test_asobject_tolist(self):
result = idx.asobject
self.assertTrue(isinstance(result, Index))
self.assertEqual(result.dtype, object)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(idx.tolist(), expected_list)
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 6e572289a3cae..6ad33b6b973de 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -25,15 +25,16 @@ def eq_gen_range(kwargs, expected):
class TestGenRangeGeneration(tm.TestCase):
+
def test_generate(self):
rng1 = list(generate_range(START, END, offset=datetools.bday))
rng2 = list(generate_range(START, END, time_rule='B'))
- self.assert_numpy_array_equal(rng1, rng2)
+ self.assertEqual(rng1, rng2)
def test_generate_cday(self):
rng1 = list(generate_range(START, END, offset=datetools.cday))
rng2 = list(generate_range(START, END, time_rule='C'))
- self.assert_numpy_array_equal(rng1, rng2)
+ self.assertEqual(rng1, rng2)
def test_1(self):
eq_gen_range(dict(start=datetime(2009, 3, 25), periods=2),
@@ -68,8 +69,8 @@ def test_precision_finer_than_offset(self):
freq='Q-DEC', tz=None)
expected2 = DatetimeIndex(expected2_list, dtype='datetime64[ns]',
freq='W-SUN', tz=None)
- self.assertTrue(result1.equals(expected1))
- self.assertTrue(result2.equals(expected2))
+ self.assert_index_equal(result1, expected1)
+ self.assert_index_equal(result2, expected2)
class TestDateRange(tm.TestCase):
@@ -140,7 +141,7 @@ def test_comparison(self):
def test_copy(self):
cp = self.rng.copy()
repr(cp)
- self.assertTrue(cp.equals(self.rng))
+ self.assert_index_equal(cp, self.rng)
def test_repr(self):
# only really care that it works
@@ -148,7 +149,9 @@ def test_repr(self):
def test_getitem(self):
smaller = self.rng[:5]
- self.assert_numpy_array_equal(smaller, self.rng.view(np.ndarray)[:5])
+ exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
+ self.assert_index_equal(smaller, exp)
+
self.assertEqual(smaller.offset, self.rng.offset)
sliced = self.rng[::5]
@@ -211,7 +214,7 @@ def test_union(self):
tm.assertIsInstance(the_union, DatetimeIndex)
# order does not matter
- self.assert_numpy_array_equal(right.union(left), the_union)
+ tm.assert_index_equal(right.union(left), the_union)
# overlapping, but different offset
rng = date_range(START, END, freq=datetools.bmonthEnd)
@@ -256,13 +259,13 @@ def test_union_not_cacheable(self):
rng1 = rng[10:]
rng2 = rng[:25]
the_union = rng1.union(rng2)
- self.assertTrue(the_union.equals(rng))
+ self.assert_index_equal(the_union, rng)
rng1 = rng[10:]
rng2 = rng[15:35]
the_union = rng1.union(rng2)
expected = rng[10:]
- self.assertTrue(the_union.equals(expected))
+ self.assert_index_equal(the_union, expected)
def test_intersection(self):
rng = date_range('1/1/2000', periods=50, freq=datetools.Minute())
@@ -270,24 +273,24 @@ def test_intersection(self):
rng2 = rng[:25]
the_int = rng1.intersection(rng2)
expected = rng[10:25]
- self.assertTrue(the_int.equals(expected))
+ self.assert_index_equal(the_int, expected)
tm.assertIsInstance(the_int, DatetimeIndex)
self.assertEqual(the_int.offset, rng.offset)
the_int = rng1.intersection(rng2.view(DatetimeIndex))
- self.assertTrue(the_int.equals(expected))
+ self.assert_index_equal(the_int, expected)
# non-overlapping
the_int = rng[:10].intersection(rng[10:])
expected = DatetimeIndex([])
- self.assertTrue(the_int.equals(expected))
+ self.assert_index_equal(the_int, expected)
def test_intersection_bug(self):
# GH #771
a = bdate_range('11/30/2011', '12/31/2011')
b = bdate_range('12/10/2011', '12/20/2011')
result = a.intersection(b)
- self.assertTrue(result.equals(b))
+ self.assert_index_equal(result, b)
def test_summary(self):
self.rng.summary()
@@ -364,7 +367,7 @@ def test_range_bug(self):
start = datetime(2011, 1, 1)
exp_values = [start + i * offset for i in range(5)]
- self.assert_numpy_array_equal(result, DatetimeIndex(exp_values))
+ tm.assert_index_equal(result, DatetimeIndex(exp_values))
def test_range_tz_pytz(self):
# GH 2906
@@ -494,8 +497,8 @@ def test_range_closed(self):
if begin == closed[0]:
expected_right = closed[1:]
- self.assertTrue(expected_left.equals(left))
- self.assertTrue(expected_right.equals(right))
+ self.assert_index_equal(expected_left, left)
+ self.assert_index_equal(expected_right, right)
def test_range_closed_with_tz_aware_start_end(self):
# GH12409
@@ -514,8 +517,8 @@ def test_range_closed_with_tz_aware_start_end(self):
if begin == closed[0]:
expected_right = closed[1:]
- self.assertTrue(expected_left.equals(left))
- self.assertTrue(expected_right.equals(right))
+ self.assert_index_equal(expected_left, left)
+ self.assert_index_equal(expected_right, right)
# test with default frequency, UTC
begin = Timestamp('2011/1/1', tz='UTC')
@@ -546,9 +549,9 @@ def test_range_closed_boundary(self):
expected_right = both_boundary[1:]
expected_left = both_boundary[:-1]
- self.assertTrue(right_boundary.equals(expected_right))
- self.assertTrue(left_boundary.equals(expected_left))
- self.assertTrue(both_boundary.equals(expected_both))
+ self.assert_index_equal(right_boundary, expected_right)
+ self.assert_index_equal(left_boundary, expected_left)
+ self.assert_index_equal(both_boundary, expected_both)
def test_years_only(self):
# GH 6961
@@ -570,8 +573,8 @@ def test_freq_divides_end_in_nanos(self):
'2005-01-13 15:45:00'],
dtype='datetime64[ns]', freq='345T',
tz=None)
- self.assertTrue(result_1.equals(expected_1))
- self.assertTrue(result_2.equals(expected_2))
+ self.assert_index_equal(result_1, expected_1)
+ self.assert_index_equal(result_2, expected_2)
class TestCustomDateRange(tm.TestCase):
@@ -613,7 +616,7 @@ def test_comparison(self):
def test_copy(self):
cp = self.rng.copy()
repr(cp)
- self.assertTrue(cp.equals(self.rng))
+ self.assert_index_equal(cp, self.rng)
def test_repr(self):
# only really care that it works
@@ -621,7 +624,8 @@ def test_repr(self):
def test_getitem(self):
smaller = self.rng[:5]
- self.assert_numpy_array_equal(smaller, self.rng.view(np.ndarray)[:5])
+ exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
+ self.assert_index_equal(smaller, exp)
self.assertEqual(smaller.offset, self.rng.offset)
sliced = self.rng[::5]
@@ -686,7 +690,7 @@ def test_union(self):
tm.assertIsInstance(the_union, DatetimeIndex)
# order does not matter
- self.assert_numpy_array_equal(right.union(left), the_union)
+ self.assert_index_equal(right.union(left), the_union)
# overlapping, but different offset
rng = date_range(START, END, freq=datetools.bmonthEnd)
@@ -731,7 +735,7 @@ def test_intersection_bug(self):
a = cdate_range('11/30/2011', '12/31/2011')
b = cdate_range('12/10/2011', '12/20/2011')
result = a.intersection(b)
- self.assertTrue(result.equals(b))
+ self.assert_index_equal(result, b)
def test_summary(self):
self.rng.summary()
@@ -783,25 +787,25 @@ def test_daterange_bug_456(self):
def test_cdaterange(self):
rng = cdate_range('2013-05-01', periods=3)
xp = DatetimeIndex(['2013-05-01', '2013-05-02', '2013-05-03'])
- self.assertTrue(xp.equals(rng))
+ self.assert_index_equal(xp, rng)
def test_cdaterange_weekmask(self):
rng = cdate_range('2013-05-01', periods=3,
weekmask='Sun Mon Tue Wed Thu')
xp = DatetimeIndex(['2013-05-01', '2013-05-02', '2013-05-05'])
- self.assertTrue(xp.equals(rng))
+ self.assert_index_equal(xp, rng)
def test_cdaterange_holidays(self):
rng = cdate_range('2013-05-01', periods=3, holidays=['2013-05-01'])
xp = DatetimeIndex(['2013-05-02', '2013-05-03', '2013-05-06'])
- self.assertTrue(xp.equals(rng))
+ self.assert_index_equal(xp, rng)
def test_cdaterange_weekmask_and_holidays(self):
rng = cdate_range('2013-05-01', periods=3,
weekmask='Sun Mon Tue Wed Thu',
holidays=['2013-05-01'])
xp = DatetimeIndex(['2013-05-02', '2013-05-05', '2013-05-06'])
- self.assertTrue(xp.equals(rng))
+ self.assert_index_equal(xp, rng)
if __name__ == '__main__':
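The `cdate_range` cases above combine a custom weekmask with holidays. In modern pandas `cdate_range` is gone and the same behaviour is spelled `bdate_range(..., freq='C', ...)`; a sketch of the weekmask-plus-holidays case from the diff, under that assumption:

```python
import pandas as pd

# Business days restricted to Sun-Thu, with 2013-05-01 excluded as a
# holiday: May 1 (holiday) and May 3-4 (Fri/Sat) are skipped.
rng = pd.bdate_range('2013-05-01', periods=3, freq='C',
                     weekmask='Sun Mon Tue Wed Thu',
                     holidays=['2013-05-01'])
assert rng.strftime('%Y-%m-%d').tolist() == \
    ['2013-05-02', '2013-05-05', '2013-05-06']
```

This matches the `xp` index asserted in `test_cdaterange_weekmask_and_holidays`.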
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 0e91e396965fa..ec88acc421cdb 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -4551,7 +4551,7 @@ def test_all_offset_classes(self):
for offset, test_values in iteritems(tests):
first = Timestamp(test_values[0], tz='US/Eastern') + offset()
second = Timestamp(test_values[1], tz='US/Eastern')
- self.assertEqual(first, second, str(offset))
+ self.assertEqual(first, second, msg=str(offset))
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index b0df824f0a832..8e6d339b87623 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -26,8 +26,6 @@
from pandas import (Series, DataFrame,
_np_version_under1p9, _np_version_under1p12)
from pandas import tslib
-from pandas.util.testing import (assert_index_equal, assert_series_equal,
- assert_almost_equal, assertRaisesRegexp)
import pandas.util.testing as tm
@@ -1752,22 +1750,21 @@ def test_constructor_simple_new(self):
result = idx._simple_new(idx.astype('i8'), 'p', freq=idx.freq)
tm.assert_index_equal(result, idx)
- result = idx._simple_new(
- [pd.Period('2007-01', freq='M'), pd.Period('2007-02', freq='M')],
- 'p', freq=idx.freq)
- self.assertTrue(result.equals(idx))
+ result = idx._simple_new([pd.Period('2007-01', freq='M'),
+ pd.Period('2007-02', freq='M')],
+ 'p', freq=idx.freq)
+ self.assert_index_equal(result, idx)
- result = idx._simple_new(
- np.array([pd.Period('2007-01', freq='M'),
- pd.Period('2007-02', freq='M')]),
- 'p', freq=idx.freq)
- self.assertTrue(result.equals(idx))
+ result = idx._simple_new(np.array([pd.Period('2007-01', freq='M'),
+ pd.Period('2007-02', freq='M')]),
+ 'p', freq=idx.freq)
+ self.assert_index_equal(result, idx)
def test_constructor_simple_new_empty(self):
# GH13079
idx = PeriodIndex([], freq='M', name='p')
result = idx._simple_new(idx, name='p', freq='M')
- assert_index_equal(result, idx)
+ tm.assert_index_equal(result, idx)
def test_constructor_simple_new_floats(self):
# GH13079
@@ -1782,7 +1779,7 @@ def test_shallow_copy_empty(self):
result = idx._shallow_copy()
expected = idx
- assert_index_equal(result, expected)
+ tm.assert_index_equal(result, expected)
def test_constructor_nat(self):
self.assertRaises(ValueError, period_range, start='NaT',
@@ -1902,7 +1899,7 @@ def test_getitem_partial(self):
exp = result
result = ts[24:]
- assert_series_equal(exp, result)
+ tm.assert_series_equal(exp, result)
ts = ts[10:].append(ts[10:])
self.assertRaisesRegexp(KeyError,
@@ -1918,7 +1915,7 @@ def test_getitem_datetime(self):
dt4 = datetime(2012, 4, 20)
rs = ts[dt1:dt4]
- assert_series_equal(rs, ts)
+ tm.assert_series_equal(rs, ts)
def test_slice_with_negative_step(self):
ts = Series(np.arange(20),
@@ -1926,9 +1923,9 @@ def test_slice_with_negative_step(self):
SLC = pd.IndexSlice
def assert_slices_equivalent(l_slc, i_slc):
- assert_series_equal(ts[l_slc], ts.iloc[i_slc])
- assert_series_equal(ts.loc[l_slc], ts.iloc[i_slc])
- assert_series_equal(ts.ix[l_slc], ts.iloc[i_slc])
+ tm.assert_series_equal(ts[l_slc], ts.iloc[i_slc])
+ tm.assert_series_equal(ts.loc[l_slc], ts.iloc[i_slc])
+ tm.assert_series_equal(ts.ix[l_slc], ts.iloc[i_slc])
assert_slices_equivalent(SLC[Period('2014-10')::-1], SLC[9::-1])
assert_slices_equivalent(SLC['2014-10'::-1], SLC[9::-1])
@@ -2100,13 +2097,13 @@ def test_as_frame_columns(self):
df = DataFrame(randn(10, 5), columns=rng)
ts = df[rng[0]]
- assert_series_equal(ts, df.ix[:, 0])
+ tm.assert_series_equal(ts, df.ix[:, 0])
# GH # 1211
repr(df)
ts = df['1/1/2000']
- assert_series_equal(ts, df.ix[:, 0])
+ tm.assert_series_equal(ts, df.ix[:, 0])
def test_indexing(self):
@@ -2151,7 +2148,7 @@ def test_frame_to_time_stamp(self):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = df.to_timestamp('D', 'end')
tm.assert_index_equal(result.index, exp_index)
- assert_almost_equal(result.values, df.values)
+ tm.assert_numpy_array_equal(result.values, df.values)
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start')
@@ -2182,7 +2179,7 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = df.to_timestamp('D', 'end', axis=1)
tm.assert_index_equal(result.columns, exp_index)
- assert_almost_equal(result.values, df.values)
+ tm.assert_numpy_array_equal(result.values, df.values)
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start', axis=1)
@@ -2204,7 +2201,7 @@ def _get_with_delta(delta, freq='A-DEC'):
tm.assert_index_equal(result.columns, exp_index)
# invalid axis
- assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)
+ tm.assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)
result1 = df.to_timestamp('5t', axis=1)
result2 = df.to_timestamp('t', axis=1)
@@ -2224,7 +2221,7 @@ def test_index_duplicate_periods(self):
result = ts[2007]
expected = ts[1:3]
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result[:] = 1
self.assertTrue((ts[1:3] == 1).all())
@@ -2234,19 +2231,19 @@ def test_index_duplicate_periods(self):
result = ts[2007]
expected = ts[idx == 2007]
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
def test_index_unique(self):
idx = PeriodIndex([2000, 2007, 2007, 2009, 2009], freq='A-JUN')
expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN')
- self.assert_numpy_array_equal(idx.unique(), expected.values)
+ self.assert_index_equal(idx.unique(), expected)
self.assertEqual(idx.nunique(), 3)
idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN',
tz='US/Eastern')
expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN',
tz='US/Eastern')
- self.assert_numpy_array_equal(idx.unique(), expected.values)
+ self.assert_index_equal(idx.unique(), expected)
self.assertEqual(idx.nunique(), 3)
def test_constructor(self):
@@ -2336,20 +2333,17 @@ def test_repeat(self):
Period('2001-01-02'), Period('2001-01-02'),
])
- assert_index_equal(index.repeat(2), expected)
+ tm.assert_index_equal(index.repeat(2), expected)
def test_numpy_repeat(self):
index = period_range('20010101', periods=2)
- expected = PeriodIndex([
- Period('2001-01-01'), Period('2001-01-01'),
- Period('2001-01-02'), Period('2001-01-02'),
- ])
+ expected = PeriodIndex([Period('2001-01-01'), Period('2001-01-01'),
+ Period('2001-01-02'), Period('2001-01-02')])
- assert_index_equal(np.repeat(index, 2), expected)
+ tm.assert_index_equal(np.repeat(index, 2), expected)
msg = "the 'axis' parameter is not supported"
- assertRaisesRegexp(ValueError, msg, np.repeat,
- index, 2, axis=1)
+ tm.assertRaisesRegexp(ValueError, msg, np.repeat, index, 2, axis=1)
def test_shift(self):
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -2598,7 +2592,7 @@ def test_negative_ordinals(self):
idx1 = PeriodIndex(ordinal=[-1, 0, 1], freq='A')
idx2 = PeriodIndex(ordinal=np.array([-1, 0, 1]), freq='A')
- tm.assert_numpy_array_equal(idx1, idx2)
+ tm.assert_index_equal(idx1, idx2)
def test_dti_to_period(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -2626,10 +2620,10 @@ def test_pindex_slice_index(self):
s = Series(np.random.rand(len(pi)), index=pi)
res = s['2010']
exp = s[0:12]
- assert_series_equal(res, exp)
+ tm.assert_series_equal(res, exp)
res = s['2011']
exp = s[12:24]
- assert_series_equal(res, exp)
+ tm.assert_series_equal(res, exp)
def test_getitem_day(self):
# GH 6716
@@ -2655,9 +2649,9 @@ def test_getitem_day(self):
continue
s = Series(np.random.rand(len(idx)), index=idx)
- assert_series_equal(s['2013/01'], s[0:31])
- assert_series_equal(s['2013/02'], s[31:59])
- assert_series_equal(s['2014'], s[365:])
+ tm.assert_series_equal(s['2013/01'], s[0:31])
+ tm.assert_series_equal(s['2013/02'], s[31:59])
+ tm.assert_series_equal(s['2014'], s[365:])
invalid = ['2013/02/01 9H', '2013/02/01 09:00']
for v in invalid:
@@ -2683,10 +2677,10 @@ def test_range_slice_day(self):
s = Series(np.random.rand(len(idx)), index=idx)
- assert_series_equal(s['2013/01/02':], s[1:])
- assert_series_equal(s['2013/01/02':'2013/01/05'], s[1:5])
- assert_series_equal(s['2013/02':], s[31:])
- assert_series_equal(s['2014':], s[365:])
+ tm.assert_series_equal(s['2013/01/02':], s[1:])
+ tm.assert_series_equal(s['2013/01/02':'2013/01/05'], s[1:5])
+ tm.assert_series_equal(s['2013/02':], s[31:])
+ tm.assert_series_equal(s['2014':], s[365:])
invalid = ['2013/02/01 9H', '2013/02/01 09:00']
for v in invalid:
@@ -2716,10 +2710,10 @@ def test_getitem_seconds(self):
continue
s = Series(np.random.rand(len(idx)), index=idx)
- assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
- assert_series_equal(s['2013/01/01 9H'], s[:3600])
+ tm.assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
+ tm.assert_series_equal(s['2013/01/01 9H'], s[:3600])
for d in ['2013/01/01', '2013/01', '2013']:
- assert_series_equal(s[d], s)
+ tm.assert_series_equal(s[d], s)
def test_range_slice_seconds(self):
# GH 6716
@@ -2741,14 +2735,14 @@ def test_range_slice_seconds(self):
s = Series(np.random.rand(len(idx)), index=idx)
- assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'],
- s[300:660])
- assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'],
- s[3600:3960])
- assert_series_equal(s['2013/01/01 10H':], s[3600:])
- assert_series_equal(s[:'2013/01/01 09:30'], s[:1860])
+ tm.assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'],
+ s[300:660])
+ tm.assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'],
+ s[3600:3960])
+ tm.assert_series_equal(s['2013/01/01 10H':], s[3600:])
+ tm.assert_series_equal(s[:'2013/01/01 09:30'], s[:1860])
for d in ['2013/01/01', '2013/01', '2013']:
- assert_series_equal(s[d:], s)
+ tm.assert_series_equal(s[d:], s)
def test_range_slice_outofbounds(self):
# GH 5407
@@ -2757,8 +2751,8 @@ def test_range_slice_outofbounds(self):
for idx in [didx, pidx]:
df = DataFrame(dict(units=[100 + i for i in range(10)]), index=idx)
- empty = DataFrame(index=idx.__class__(
- [], freq='D'), columns=['units'])
+ empty = DataFrame(index=idx.__class__([], freq='D'),
+ columns=['units'])
empty['units'] = empty['units'].astype('int64')
tm.assert_frame_equal(df['2013/09/01':'2013/09/30'], empty)
@@ -2949,16 +2943,16 @@ def test_align_series(self):
result = ts + ts[::2]
expected = ts + ts
expected[1::2] = np.nan
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
result = ts + _permute(ts[::2])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# it works!
for kind in ['inner', 'outer', 'left', 'right']:
ts.align(ts[::2], join=kind)
msg = "Input has different freq=D from PeriodIndex\\(freq=A-DEC\\)"
- with assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
ts + ts.asfreq('D', how="end")
def test_align_frame(self):
@@ -3158,7 +3152,7 @@ def test_map(self):
tm.assert_index_equal(result, expected)
result = index.map(lambda x: x.ordinal)
- exp = [x.ordinal for x in index]
+ exp = np.array([x.ordinal for x in index], dtype=np.int64)
tm.assert_numpy_array_equal(result, exp)
def test_map_with_string_constructor(self):
@@ -4231,19 +4225,19 @@ def test_constructor_cast_object(self):
def test_series_comparison_scalars(self):
val = pd.Period('2000-01-04', freq='D')
result = self.series > val
- expected = np.array([x > val for x in self.series])
- self.assert_numpy_array_equal(result, expected)
+ expected = pd.Series([x > val for x in self.series])
+ tm.assert_series_equal(result, expected)
val = self.series[5]
result = self.series > val
- expected = np.array([x > val for x in self.series])
- self.assert_numpy_array_equal(result, expected)
+ expected = pd.Series([x > val for x in self.series])
+ tm.assert_series_equal(result, expected)
def test_between(self):
left, right = self.series[[2, 7]]
result = self.series.between(left, right)
expected = (self.series >= left) & (self.series <= right)
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
# ---------------------------------------------------------------------
# NaT support
@@ -4262,7 +4256,7 @@ def test_NaT_scalar(self):
def test_NaT_cast(self):
result = Series([np.nan]).astype('period[D]')
expected = Series([NaT])
- assert_series_equal(result, expected)
+ tm.assert_series_equal(result, expected)
"""
def test_set_none_nan(self):
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 67df62e1ebb57..2255f9fae73de 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -330,7 +330,7 @@ def test_dataframe(self):
bts = DataFrame({'a': tm.makeTimeSeries()})
ax = bts.plot()
idx = ax.get_lines()[0].get_xdata()
- tm.assert_numpy_array_equal(bts.index.to_period(), PeriodIndex(idx))
+ tm.assert_index_equal(bts.index.to_period(), PeriodIndex(idx))
@slow
def test_axis_limits(self):
@@ -1113,7 +1113,7 @@ def test_ax_plot(self):
fig = plt.figure()
ax = fig.add_subplot(111)
lines = ax.plot(x, y, label='Y')
- tm.assert_numpy_array_equal(DatetimeIndex(lines[0].get_xdata()), x)
+ tm.assert_index_equal(DatetimeIndex(lines[0].get_xdata()), x)
@slow
def test_mpl_nopandas(self):
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 6b94c828bddc0..2236d20975eee 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -1418,7 +1418,7 @@ def test_resample_base(self):
resampled = ts.resample('5min', base=2).mean()
exp_rng = date_range('12/31/1999 23:57:00', '1/1/2000 01:57',
freq='5min')
- self.assertTrue(resampled.index.equals(exp_rng))
+ self.assert_index_equal(resampled.index, exp_rng)
def test_resample_base_with_timedeltaindex(self):
@@ -1432,8 +1432,8 @@ def test_resample_base_with_timedeltaindex(self):
exp_without_base = timedelta_range(start='0s', end='25s', freq='2s')
exp_with_base = timedelta_range(start='5s', end='29s', freq='2s')
- self.assertTrue(without_base.index.equals(exp_without_base))
- self.assertTrue(with_base.index.equals(exp_with_base))
+ self.assert_index_equal(without_base.index, exp_without_base)
+ self.assert_index_equal(with_base.index, exp_with_base)
def test_resample_categorical_data_with_timedeltaindex(self):
# GH #12169
@@ -1464,7 +1464,7 @@ def test_resample_to_period_monthly_buglet(self):
result = ts.resample('M', kind='period').mean()
exp_index = period_range('Jan-2000', 'Dec-2000', freq='M')
- self.assertTrue(result.index.equals(exp_index))
+ self.assert_index_equal(result.index, exp_index)
def test_period_with_agg(self):
@@ -1627,7 +1627,7 @@ def test_corner_cases(self):
result = ts.resample('5t', closed='right', label='left').mean()
ex_index = date_range('1999-12-31 23:55', periods=4, freq='5t')
- self.assertTrue(result.index.equals(ex_index))
+ self.assert_index_equal(result.index, ex_index)
len0pts = _simple_pts('2007-01', '2010-05', freq='M')[:0]
# it works
@@ -2391,7 +2391,7 @@ def test_closed_left_corner(self):
ex_index = date_range(start='1/1/2012 9:30', freq='10min', periods=3)
- self.assertTrue(result.index.equals(ex_index))
+ self.assert_index_equal(result.index, ex_index)
assert_series_equal(result, exp)
def test_quarterly_resampling(self):
@@ -2760,7 +2760,7 @@ def test_apply_iteration(self):
# it works!
result = grouped.apply(f)
- self.assertTrue(result.index.equals(df.index))
+ self.assert_index_equal(result.index, df.index)
def test_panel_aggregation(self):
ind = pd.date_range('1/1/2000', periods=100)
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index 20098488f7f1c..10276137b42a1 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -1223,7 +1223,7 @@ def test_total_seconds(self):
freq='s')
expt = [1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9,
1 * 86400 + 10 * 3600 + 11 * 60 + 13 + 100123456. / 1e9]
- tm.assert_almost_equal(rng.total_seconds(), expt)
+ tm.assert_almost_equal(rng.total_seconds(), np.array(expt))
# test Series
s = Series(rng)
@@ -1288,7 +1288,7 @@ def test_constructor(self):
def test_constructor_coverage(self):
rng = timedelta_range('1 days', periods=10.5)
exp = timedelta_range('1 days', periods=10)
- self.assertTrue(rng.equals(exp))
+ self.assert_index_equal(rng, exp)
self.assertRaises(ValueError, TimedeltaIndex, start='1 days',
periods='foo', freq='D')
@@ -1302,16 +1302,16 @@ def test_constructor_coverage(self):
gen = (timedelta(i) for i in range(10))
result = TimedeltaIndex(gen)
expected = TimedeltaIndex([timedelta(i) for i in range(10)])
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
# NumPy string array
strings = np.array(['1 days', '2 days', '3 days'])
result = TimedeltaIndex(strings)
expected = to_timedelta([1, 2, 3], unit='d')
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
from_ints = TimedeltaIndex(expected.asi8)
- self.assertTrue(from_ints.equals(expected))
+ self.assert_index_equal(from_ints, expected)
# non-conforming freq
self.assertRaises(ValueError, TimedeltaIndex,
@@ -1438,7 +1438,7 @@ def test_map(self):
f = lambda x: x.days
result = rng.map(f)
- exp = [f(x) for x in rng]
+ exp = np.array([f(x) for x in rng], dtype=np.int64)
self.assert_numpy_array_equal(result, exp)
def test_misc_coverage(self):
@@ -1459,7 +1459,7 @@ def test_union(self):
i2 = timedelta_range('3day', periods=5)
result = i1.union(i2)
expected = timedelta_range('1day', periods=7)
- self.assert_numpy_array_equal(result, expected)
+ self.assert_index_equal(result, expected)
i1 = Int64Index(np.arange(0, 20, 2))
i2 = TimedeltaIndex(start='1 day', periods=10, freq='D')
@@ -1471,10 +1471,10 @@ def test_union_coverage(self):
idx = TimedeltaIndex(['3d', '1d', '2d'])
ordered = TimedeltaIndex(idx.sort_values(), freq='infer')
result = ordered.union(idx)
- self.assertTrue(result.equals(ordered))
+ self.assert_index_equal(result, ordered)
result = ordered[:0].union(ordered)
- self.assertTrue(result.equals(ordered))
+ self.assert_index_equal(result, ordered)
self.assertEqual(result.freq, ordered.freq)
def test_union_bug_1730(self):
@@ -1484,18 +1484,18 @@ def test_union_bug_1730(self):
result = rng_a.union(rng_b)
exp = TimedeltaIndex(sorted(set(list(rng_a)) | set(list(rng_b))))
- self.assertTrue(result.equals(exp))
+ self.assert_index_equal(result, exp)
def test_union_bug_1745(self):
left = TimedeltaIndex(['1 day 15:19:49.695000'])
- right = TimedeltaIndex(
- ['2 day 13:04:21.322000', '1 day 15:27:24.873000',
- '1 day 15:31:05.350000'])
+ right = TimedeltaIndex(['2 day 13:04:21.322000',
+ '1 day 15:27:24.873000',
+ '1 day 15:31:05.350000'])
result = left.union(right)
exp = TimedeltaIndex(sorted(set(list(left)) | set(list(right))))
- self.assertTrue(result.equals(exp))
+ self.assert_index_equal(result, exp)
def test_union_bug_4564(self):
@@ -1504,7 +1504,7 @@ def test_union_bug_4564(self):
result = left.union(right)
exp = TimedeltaIndex(sorted(set(list(left)) | set(list(right))))
- self.assertTrue(result.equals(exp))
+ self.assert_index_equal(result, exp)
def test_intersection_bug_1708(self):
index_1 = timedelta_range('1 day', periods=4, freq='h')
@@ -1526,7 +1526,7 @@ def test_get_duplicates(self):
result = idx.get_duplicates()
ex = TimedeltaIndex(['2 day', '3day'])
- self.assertTrue(result.equals(ex))
+ self.assert_index_equal(result, ex)
def test_argmin_argmax(self):
idx = TimedeltaIndex(['1 day 00:00:05', '1 day 00:00:01',
@@ -1546,11 +1546,13 @@ def test_sort_values(self):
ordered, dexer = idx.sort_values(return_indexer=True)
self.assertTrue(ordered.is_monotonic)
- self.assert_numpy_array_equal(dexer, [1, 2, 0])
+ self.assert_numpy_array_equal(dexer,
+ np.array([1, 2, 0], dtype=np.int64))
ordered, dexer = idx.sort_values(return_indexer=True, ascending=False)
self.assertTrue(ordered[::-1].is_monotonic)
- self.assert_numpy_array_equal(dexer, [0, 2, 1])
+ self.assert_numpy_array_equal(dexer,
+ np.array([0, 2, 1], dtype=np.int64))
def test_insert(self):
@@ -1558,7 +1560,7 @@ def test_insert(self):
result = idx.insert(2, timedelta(days=5))
exp = TimedeltaIndex(['4day', '1day', '5day', '2day'], name='idx')
- self.assertTrue(result.equals(exp))
+ self.assert_index_equal(result, exp)
# insertion of non-datetime should coerce to object index
result = idx.insert(1, 'inserted')
@@ -1594,7 +1596,7 @@ def test_insert(self):
for n, d, expected in cases:
result = idx.insert(n, d)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -1618,7 +1620,7 @@ def test_delete(self):
1: expected_1}
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -1645,12 +1647,12 @@ def test_delete_slice(self):
(3, 4, 5): expected_3_5}
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
result = idx.delete(slice(n[0], n[-1] + 1))
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -1664,7 +1666,7 @@ def test_take(self):
taken2 = idx[[2, 4, 10]]
for taken in [taken1, taken2]:
- self.assertTrue(taken.equals(expected))
+ self.assert_index_equal(taken, expected)
tm.assertIsInstance(taken, TimedeltaIndex)
self.assertIsNone(taken.freq)
self.assertEqual(taken.name, expected.name)
@@ -1711,7 +1713,7 @@ def test_isin(self):
self.assertTrue(result.all())
assert_almost_equal(index.isin([index[2], 5]),
- [False, False, True, False])
+ np.array([False, False, True, False]))
def test_does_not_convert_mixed_integer(self):
df = tm.makeCustomDataframe(10, 10,
@@ -1748,18 +1750,18 @@ def test_factorize(self):
arr, idx = idx1.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ self.assert_index_equal(idx, exp_idx)
arr, idx = idx1.factorize(sort=True)
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ self.assert_index_equal(idx, exp_idx)
# freq must be preserved
idx3 = timedelta_range('1 day', periods=4, freq='s')
exp_arr = np.array([0, 1, 2, 3])
arr, idx = idx3.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(idx3))
+ self.assert_index_equal(idx, idx3)
class TestSlicing(tm.TestCase):
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 3a3315ed3890c..f6d80f7ee410b 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -59,7 +59,7 @@ def test_index_unique(self):
expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
datetime(2000, 1, 4), datetime(2000, 1, 5)])
self.assertEqual(uniques.dtype, 'M8[ns]') # sanity
- self.assertTrue(uniques.equals(expected))
+ tm.assert_index_equal(uniques, expected)
self.assertEqual(self.dups.index.nunique(), 4)
# #2563
@@ -68,22 +68,23 @@ def test_index_unique(self):
dups_local = self.dups.index.tz_localize('US/Eastern')
dups_local.name = 'foo'
result = dups_local.unique()
- expected = DatetimeIndex(expected).tz_localize('US/Eastern')
+ expected = DatetimeIndex(expected, name='foo')
+ expected = expected.tz_localize('US/Eastern')
self.assertTrue(result.tz is not None)
self.assertEqual(result.name, 'foo')
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
# NaT, note this is excluded
arr = [1370745748 + t for t in range(20)] + [iNaT]
idx = DatetimeIndex(arr * 3)
- self.assertTrue(idx.unique().equals(DatetimeIndex(arr)))
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
self.assertEqual(idx.nunique(), 20)
self.assertEqual(idx.nunique(dropna=False), 21)
arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
for t in range(20)] + [NaT]
idx = DatetimeIndex(arr * 3)
- self.assertTrue(idx.unique().equals(DatetimeIndex(arr)))
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
self.assertEqual(idx.nunique(), 20)
self.assertEqual(idx.nunique(dropna=False), 21)
@@ -284,12 +285,12 @@ def test_recreate_from_data(self):
for f in freqs:
org = DatetimeIndex(start='2001/02/01 09:00', freq=f, periods=1)
idx = DatetimeIndex(org, freq=f)
- self.assertTrue(idx.equals(org))
+ tm.assert_index_equal(idx, org)
org = DatetimeIndex(start='2001/02/01 09:00', freq=f,
tz='US/Pacific', periods=1)
idx = DatetimeIndex(org, freq=f, tz='US/Pacific')
- self.assertTrue(idx.equals(org))
+ tm.assert_index_equal(idx, org)
def assert_range_equal(left, right):
@@ -874,7 +875,7 @@ def test_string_na_nat_conversion(self):
result2 = to_datetime(strings)
tm.assertIsInstance(result2, DatetimeIndex)
- tm.assert_numpy_array_equal(result, result2)
+ tm.assert_numpy_array_equal(result, result2.values)
malformed = np.array(['1/100/2000', np.nan], dtype=object)
@@ -1065,7 +1066,7 @@ def test_to_datetime_list_of_integers(self):
result = DatetimeIndex(ints)
- self.assertTrue(rng.equals(result))
+ tm.assert_index_equal(rng, result)
def test_to_datetime_freq(self):
xp = bdate_range('2000-1-1', periods=10, tz='UTC')
@@ -1162,15 +1163,15 @@ def test_date_range_gen_error(self):
def test_date_range_negative_freq(self):
# GH 11018
rng = date_range('2011-12-31', freq='-2A', periods=3)
- exp = pd.DatetimeIndex(
- ['2011-12-31', '2009-12-31', '2007-12-31'], freq='-2A')
- self.assert_index_equal(rng, exp)
+ exp = pd.DatetimeIndex(['2011-12-31', '2009-12-31',
+ '2007-12-31'], freq='-2A')
+ tm.assert_index_equal(rng, exp)
self.assertEqual(rng.freq, '-2A')
rng = date_range('2011-01-31', freq='-2M', periods=3)
- exp = pd.DatetimeIndex(
- ['2011-01-31', '2010-11-30', '2010-09-30'], freq='-2M')
- self.assert_index_equal(rng, exp)
+ exp = pd.DatetimeIndex(['2011-01-31', '2010-11-30',
+ '2010-09-30'], freq='-2M')
+ tm.assert_index_equal(rng, exp)
self.assertEqual(rng.freq, '-2M')
def test_date_range_bms_bug(self):
@@ -1523,7 +1524,7 @@ def test_normalize(self):
result = rng.normalize()
expected = date_range('1/1/2000', periods=10, freq='D')
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
rng_ns = pd.DatetimeIndex(np.array([1380585623454345752,
1380585612343234312]).astype(
@@ -1532,7 +1533,7 @@ def test_normalize(self):
expected = pd.DatetimeIndex(np.array([1380585600000000000,
1380585600000000000]).astype(
"datetime64[ns]"))
- self.assertTrue(rng_ns_normalized.equals(expected))
+ tm.assert_index_equal(rng_ns_normalized, expected)
self.assertTrue(result.is_normalized)
self.assertFalse(rng.is_normalized)
@@ -1549,7 +1550,7 @@ def test_to_period(self):
pts = ts.to_period('M')
exp.index = exp.index.asfreq('M')
- self.assertTrue(pts.index.equals(exp.index.asfreq('M')))
+ tm.assert_index_equal(pts.index, exp.index.asfreq('M'))
assert_series_equal(pts, exp)
# GH 7606 without freq
@@ -1607,7 +1608,7 @@ def test_to_period_tz_pytz(self):
expected = ts[0].to_period()
self.assertEqual(result, expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=UTC)
@@ -1615,7 +1616,7 @@ def test_to_period_tz_pytz(self):
expected = ts[0].to_period()
self.assertEqual(result, expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -1623,7 +1624,7 @@ def test_to_period_tz_pytz(self):
expected = ts[0].to_period()
self.assertEqual(result, expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
def test_to_period_tz_explicit_pytz(self):
tm._skip_if_no_pytz()
@@ -1638,7 +1639,7 @@ def test_to_period_tz_explicit_pytz(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=pytz.utc)
@@ -1646,7 +1647,7 @@ def test_to_period_tz_explicit_pytz(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -1654,7 +1655,7 @@ def test_to_period_tz_explicit_pytz(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
def test_to_period_tz_dateutil(self):
tm._skip_if_no_dateutil()
@@ -1669,7 +1670,7 @@ def test_to_period_tz_dateutil(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=dateutil.tz.tzutc())
@@ -1677,7 +1678,7 @@ def test_to_period_tz_dateutil(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())
@@ -1685,7 +1686,7 @@ def test_to_period_tz_dateutil(self):
expected = ts[0].to_period()
self.assertTrue(result == expected)
- self.assertTrue(ts.to_period().equals(xp))
+ tm.assert_index_equal(ts.to_period(), xp)
def test_frame_to_period(self):
K = 5
@@ -1702,7 +1703,7 @@ def test_frame_to_period(self):
assert_frame_equal(pts, exp)
pts = df.to_period('M')
- self.assertTrue(pts.index.equals(exp.index.asfreq('M')))
+ tm.assert_index_equal(pts.index, exp.index.asfreq('M'))
df = df.T
pts = df.to_period(axis=1)
@@ -1711,7 +1712,7 @@ def test_frame_to_period(self):
assert_frame_equal(pts, exp)
pts = df.to_period('M', axis=1)
- self.assertTrue(pts.columns.equals(exp.columns.asfreq('M')))
+ tm.assert_index_equal(pts.columns, exp.columns.asfreq('M'))
self.assertRaises(ValueError, df.to_period, axis=2)
@@ -1799,11 +1800,11 @@ def test_datetimeindex_integers_shift(self):
result = rng + 5
expected = rng.shift(5)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
result = rng - 5
expected = rng.shift(-5)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
def test_astype_object(self):
# NumPy 1.6.1 weak ns support
@@ -1812,7 +1813,8 @@ def test_astype_object(self):
casted = rng.astype('O')
exp_values = list(rng)
- self.assert_numpy_array_equal(casted, exp_values)
+ tm.assert_index_equal(casted, Index(exp_values, dtype=np.object_))
+ self.assertEqual(casted.tolist(), exp_values)
def test_catch_infinite_loop(self):
offset = datetools.DateOffset(minute=5)
@@ -1828,15 +1830,15 @@ def test_append_concat(self):
result = ts.append(ts)
result_df = df.append(df)
ex_index = DatetimeIndex(np.tile(rng.values, 2))
- self.assertTrue(result.index.equals(ex_index))
- self.assertTrue(result_df.index.equals(ex_index))
+ tm.assert_index_equal(result.index, ex_index)
+ tm.assert_index_equal(result_df.index, ex_index)
appended = rng.append(rng)
- self.assertTrue(appended.equals(ex_index))
+ tm.assert_index_equal(appended, ex_index)
appended = rng.append([rng, rng])
ex_index = DatetimeIndex(np.tile(rng.values, 3))
- self.assertTrue(appended.equals(ex_index))
+ tm.assert_index_equal(appended, ex_index)
# different index names
rng1 = rng.copy()
@@ -1863,11 +1865,11 @@ def test_append_concat_tz(self):
result = ts.append(ts2)
result_df = df.append(df2)
- self.assertTrue(result.index.equals(rng3))
- self.assertTrue(result_df.index.equals(rng3))
+ tm.assert_index_equal(result.index, rng3)
+ tm.assert_index_equal(result_df.index, rng3)
appended = rng.append(rng2)
- self.assertTrue(appended.equals(rng3))
+ tm.assert_index_equal(appended, rng3)
def test_append_concat_tz_explicit_pytz(self):
# GH 2938
@@ -1887,11 +1889,11 @@ def test_append_concat_tz_explicit_pytz(self):
result = ts.append(ts2)
result_df = df.append(df2)
- self.assertTrue(result.index.equals(rng3))
- self.assertTrue(result_df.index.equals(rng3))
+ tm.assert_index_equal(result.index, rng3)
+ tm.assert_index_equal(result_df.index, rng3)
appended = rng.append(rng2)
- self.assertTrue(appended.equals(rng3))
+ tm.assert_index_equal(appended, rng3)
def test_append_concat_tz_dateutil(self):
# GH 2938
@@ -1909,11 +1911,11 @@ def test_append_concat_tz_dateutil(self):
result = ts.append(ts2)
result_df = df.append(df2)
- self.assertTrue(result.index.equals(rng3))
- self.assertTrue(result_df.index.equals(rng3))
+ tm.assert_index_equal(result.index, rng3)
+ tm.assert_index_equal(result_df.index, rng3)
appended = rng.append(rng2)
- self.assertTrue(appended.equals(rng3))
+ tm.assert_index_equal(appended, rng3)
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
@@ -2440,13 +2442,13 @@ def test_index_to_datetime(self):
result = idx.to_datetime()
expected = DatetimeIndex(datetools.to_datetime(idx.values))
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
today = datetime.today()
idx = Index([today], dtype=object)
result = idx.to_datetime()
expected = DatetimeIndex([today])
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
def test_dataframe(self):
@@ -2596,14 +2598,14 @@ def test_to_period_nofreq(self):
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'],
freq='infer')
self.assertEqual(idx.freqstr, 'D')
- expected = pd.PeriodIndex(
- ['2000-01-01', '2000-01-02', '2000-01-03'], freq='D')
- self.assertTrue(idx.to_period().equals(expected))
+ expected = pd.PeriodIndex(['2000-01-01', '2000-01-02',
+ '2000-01-03'], freq='D')
+ tm.assert_index_equal(idx.to_period(), expected)
# GH 7606
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
self.assertEqual(idx.freqstr, None)
- self.assertTrue(idx.to_period().equals(expected))
+ tm.assert_index_equal(idx.to_period(), expected)
def test_000constructor_resolution(self):
# 2252
@@ -2615,7 +2617,7 @@ def test_000constructor_resolution(self):
def test_constructor_coverage(self):
rng = date_range('1/1/2000', periods=10.5)
exp = date_range('1/1/2000', periods=10)
- self.assertTrue(rng.equals(exp))
+ tm.assert_index_equal(rng, exp)
self.assertRaises(ValueError, DatetimeIndex, start='1/1/2000',
periods='foo', freq='D')
@@ -2630,25 +2632,25 @@ def test_constructor_coverage(self):
result = DatetimeIndex(gen)
expected = DatetimeIndex([datetime(2000, 1, 1) + timedelta(i)
for i in range(10)])
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
# NumPy string array
strings = np.array(['2000-01-01', '2000-01-02', '2000-01-03'])
result = DatetimeIndex(strings)
expected = DatetimeIndex(strings.astype('O'))
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
from_ints = DatetimeIndex(expected.asi8)
- self.assertTrue(from_ints.equals(expected))
+ tm.assert_index_equal(from_ints, expected)
# string with NaT
strings = np.array(['2000-01-01', '2000-01-02', 'NaT'])
result = DatetimeIndex(strings)
expected = DatetimeIndex(strings.astype('O'))
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
from_ints = DatetimeIndex(expected.asi8)
- self.assertTrue(from_ints.equals(expected))
+ tm.assert_index_equal(from_ints, expected)
# non-conforming
self.assertRaises(ValueError, DatetimeIndex,
@@ -2715,17 +2717,15 @@ def test_constructor_datetime64_tzformat(self):
def test_constructor_dtype(self):
# passing a dtype with a tz should localize
- idx = DatetimeIndex(['2013-01-01',
- '2013-01-02'],
+ idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
dtype='datetime64[ns, US/Eastern]')
expected = DatetimeIndex(['2013-01-01', '2013-01-02']
).tz_localize('US/Eastern')
- self.assertTrue(idx.equals(expected))
+ tm.assert_index_equal(idx, expected)
- idx = DatetimeIndex(['2013-01-01',
- '2013-01-02'],
+ idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
tz='US/Eastern')
- self.assertTrue(idx.equals(expected))
+ tm.assert_index_equal(idx, expected)
# if we already have a tz and its not the same, then raise
idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
@@ -2744,7 +2744,7 @@ def test_constructor_dtype(self):
idx, tz='CET',
dtype='datetime64[ns, US/Eastern]'))
result = DatetimeIndex(idx, dtype='datetime64[ns, US/Eastern]')
- self.assertTrue(idx.equals(result))
+ tm.assert_index_equal(idx, result)
def test_constructor_name(self):
idx = DatetimeIndex(start='2000-01-01', periods=1, freq='A',
@@ -2860,7 +2860,7 @@ def test_map(self):
f = lambda x: x.strftime('%Y%m%d')
result = rng.map(f)
- exp = [f(x) for x in rng]
+ exp = np.array([f(x) for x in rng], dtype='<U8')
tm.assert_almost_equal(result, exp)
def test_iteration_preserves_tz(self):
@@ -2909,10 +2909,10 @@ def test_union_coverage(self):
idx = DatetimeIndex(['2000-01-03', '2000-01-01', '2000-01-02'])
ordered = DatetimeIndex(idx.sort_values(), freq='infer')
result = ordered.union(idx)
- self.assertTrue(result.equals(ordered))
+ tm.assert_index_equal(result, ordered)
result = ordered[:0].union(ordered)
- self.assertTrue(result.equals(ordered))
+ tm.assert_index_equal(result, ordered)
self.assertEqual(result.freq, ordered.freq)
def test_union_bug_1730(self):
@@ -2921,17 +2921,17 @@ def test_union_bug_1730(self):
result = rng_a.union(rng_b)
exp = DatetimeIndex(sorted(set(list(rng_a)) | set(list(rng_b))))
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_union_bug_1745(self):
left = DatetimeIndex(['2012-05-11 15:19:49.695000'])
- right = DatetimeIndex(
- ['2012-05-29 13:04:21.322000', '2012-05-11 15:27:24.873000',
- '2012-05-11 15:31:05.350000'])
+ right = DatetimeIndex(['2012-05-29 13:04:21.322000',
+ '2012-05-11 15:27:24.873000',
+ '2012-05-11 15:31:05.350000'])
result = left.union(right)
exp = DatetimeIndex(sorted(set(list(left)) | set(list(right))))
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_union_bug_4564(self):
from pandas import DateOffset
@@ -2940,7 +2940,7 @@ def test_union_bug_4564(self):
result = left.union(right)
exp = DatetimeIndex(sorted(set(list(left)) | set(list(right))))
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_union_freq_both_none(self):
# GH11086
@@ -2960,7 +2960,7 @@ def test_union_dataframe_index(self):
df = DataFrame({'s1': s1, 's2': s2})
exp = pd.date_range('1/1/1980', '1/1/2012', freq='MS')
- self.assert_index_equal(df.index, exp)
+ tm.assert_index_equal(df.index, exp)
def test_intersection_bug_1708(self):
from pandas import DateOffset
@@ -2991,7 +2991,7 @@ def test_intersection(self):
for (rng, expected) in [(rng2, expected2), (rng3, expected3),
(rng4, expected4)]:
result = base.intersection(rng)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
self.assertEqual(result.tz, expected.tz)
@@ -3021,7 +3021,7 @@ def test_intersection(self):
for (rng, expected) in [(rng2, expected2), (rng3, expected3),
(rng4, expected4)]:
result = base.intersection(rng)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertIsNone(result.freq)
self.assertEqual(result.tz, expected.tz)
@@ -3168,7 +3168,7 @@ def test_get_duplicates(self):
result = idx.get_duplicates()
ex = DatetimeIndex(['2000-01-02', '2000-01-03'])
- self.assertTrue(result.equals(ex))
+ tm.assert_index_equal(result, ex)
def test_argmin_argmax(self):
idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'])
@@ -3186,11 +3186,13 @@ def test_sort_values(self):
ordered, dexer = idx.sort_values(return_indexer=True)
self.assertTrue(ordered.is_monotonic)
- self.assert_numpy_array_equal(dexer, [1, 2, 0])
+ self.assert_numpy_array_equal(dexer,
+ np.array([1, 2, 0], dtype=np.intp))
ordered, dexer = idx.sort_values(return_indexer=True, ascending=False)
self.assertTrue(ordered[::-1].is_monotonic)
- self.assert_numpy_array_equal(dexer, [0, 2, 1])
+ self.assert_numpy_array_equal(dexer,
+ np.array([0, 2, 1], dtype=np.intp))
def test_round(self):
@@ -3267,7 +3269,7 @@ def test_insert(self):
result = idx.insert(2, datetime(2000, 1, 5))
exp = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-05',
'2000-01-02'], name='idx')
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
# insertion of non-datetime should coerce to object index
result = idx.insert(1, 'inserted')
@@ -3304,7 +3306,7 @@ def test_insert(self):
for n, d, expected in cases:
result = idx.insert(n, d)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -3312,7 +3314,7 @@ def test_insert(self):
result = idx.insert(3, datetime(2000, 1, 2))
expected = DatetimeIndex(['2000-01-31', '2000-02-29', '2000-03-31',
'2000-01-02'], name='idx', freq=None)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertTrue(result.freq is None)
@@ -3343,7 +3345,7 @@ def test_insert(self):
pytz.timezone(tz).localize(datetime(2000, 1, 1, 15))]:
result = idx.insert(6, d)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
self.assertEqual(result.tz, expected.tz)
@@ -3358,7 +3360,7 @@ def test_insert(self):
for d in [pd.Timestamp('2000-01-01 10:00', tz=tz),
pytz.timezone(tz).localize(datetime(2000, 1, 1, 10))]:
result = idx.insert(6, d)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertTrue(result.freq is None)
self.assertEqual(result.tz, expected.tz)
@@ -3383,7 +3385,7 @@ def test_delete(self):
1: expected_1}
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -3398,7 +3400,7 @@ def test_delete(self):
expected = date_range(start='2000-01-01 10:00', periods=9,
freq='H', name='idx', tz=tz)
result = idx.delete(0)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freqstr, 'H')
self.assertEqual(result.tz, expected.tz)
@@ -3406,7 +3408,7 @@ def test_delete(self):
expected = date_range(start='2000-01-01 09:00', periods=9,
freq='H', name='idx', tz=tz)
result = idx.delete(-1)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freqstr, 'H')
self.assertEqual(result.tz, expected.tz)
@@ -3430,12 +3432,12 @@ def test_delete_slice(self):
(3, 4, 5): expected_3_5}
for n, expected in compat.iteritems(cases):
result = idx.delete(n)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
result = idx.delete(slice(n[0], n[-1] + 1))
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -3446,7 +3448,7 @@ def test_delete_slice(self):
result = ts.drop(ts.index[:5]).index
expected = pd.date_range('2000-01-01 14:00', periods=5, freq='H',
name='idx', tz=tz)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
self.assertEqual(result.tz, expected.tz)
@@ -3457,7 +3459,7 @@ def test_delete_slice(self):
'2000-01-01 13:00',
'2000-01-01 15:00', '2000-01-01 17:00'],
freq=None, name='idx', tz=tz)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
self.assertEqual(result.tz, expected.tz)
@@ -3476,7 +3478,7 @@ def test_take(self):
taken2 = idx[[5, 6, 8, 12]]
for taken in [taken1, taken2]:
- self.assertTrue(taken.equals(expected))
+ tm.assert_index_equal(taken, expected)
tm.assertIsInstance(taken, DatetimeIndex)
self.assertIsNone(taken.freq)
self.assertEqual(taken.tz, expected.tz)
@@ -3579,14 +3581,14 @@ def test_isin(self):
self.assertTrue(result.all())
assert_almost_equal(index.isin([index[2], 5]),
- [False, False, True, False])
+ np.array([False, False, True, False]))
def test_union(self):
i1 = Int64Index(np.arange(0, 20, 2))
i2 = Int64Index(np.arange(10, 30, 2))
result = i1.union(i2)
expected = Int64Index(np.arange(0, 30, 2))
- self.assert_numpy_array_equal(result, expected)
+ tm.assert_index_equal(result, expected)
def test_union_with_DatetimeIndex(self):
i1 = Int64Index(np.arange(0, 20, 2))
@@ -3669,11 +3671,11 @@ def test_factorize(self):
arr, idx = idx1.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
arr, idx = idx1.factorize(sort=True)
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
# tz must be preserved
idx1 = idx1.tz_localize('Asia/Tokyo')
@@ -3681,7 +3683,7 @@ def test_factorize(self):
arr, idx = idx1.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
idx2 = pd.DatetimeIndex(['2014-03', '2014-03', '2014-02', '2014-01',
'2014-03', '2014-01'])
@@ -3690,20 +3692,20 @@ def test_factorize(self):
exp_idx = DatetimeIndex(['2014-01', '2014-02', '2014-03'])
arr, idx = idx2.factorize(sort=True)
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
exp_arr = np.array([0, 0, 1, 2, 0, 2])
exp_idx = DatetimeIndex(['2014-03', '2014-02', '2014-01'])
arr, idx = idx2.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
# freq must be preserved
idx3 = date_range('2000-01', periods=4, freq='M', tz='Asia/Tokyo')
exp_arr = np.array([0, 1, 2, 3])
arr, idx = idx3.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(idx3))
+ tm.assert_index_equal(idx, idx3)
def test_slice_with_negative_step(self):
ts = Series(np.arange(20),
@@ -3955,7 +3957,7 @@ def test_datetimeindex_constructor(self):
idx7 = DatetimeIndex(['12/05/2007', '25/01/2008'], dayfirst=True)
idx8 = DatetimeIndex(['2007/05/12', '2008/01/25'], dayfirst=False,
yearfirst=True)
- self.assertTrue(idx7.equals(idx8))
+ tm.assert_index_equal(idx7, idx8)
for other in [idx2, idx3, idx4, idx5, idx6]:
self.assertTrue((idx1.values == other.values).all())
@@ -4001,12 +4003,12 @@ def test_dayfirst(self):
idx4 = to_datetime(np.array(arr), dayfirst=True)
idx5 = DatetimeIndex(Index(arr), dayfirst=True)
idx6 = DatetimeIndex(Series(arr), dayfirst=True)
- self.assertTrue(expected.equals(idx1))
- self.assertTrue(expected.equals(idx2))
- self.assertTrue(expected.equals(idx3))
- self.assertTrue(expected.equals(idx4))
- self.assertTrue(expected.equals(idx5))
- self.assertTrue(expected.equals(idx6))
+ tm.assert_index_equal(expected, idx1)
+ tm.assert_index_equal(expected, idx2)
+ tm.assert_index_equal(expected, idx3)
+ tm.assert_index_equal(expected, idx4)
+ tm.assert_index_equal(expected, idx5)
+ tm.assert_index_equal(expected, idx6)
def test_dti_snap(self):
dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
@@ -4046,9 +4048,9 @@ def test_dti_set_index_reindex(self):
idx2 = date_range('2013', periods=6, freq='A', tz='Asia/Tokyo')
df = df.set_index(idx1)
- self.assertTrue(df.index.equals(idx1))
+ tm.assert_index_equal(df.index, idx1)
df = df.reindex(idx2)
- self.assertTrue(df.index.equals(idx2))
+ tm.assert_index_equal(df.index, idx2)
# 11314
# with tz
@@ -4163,13 +4165,13 @@ def test_constructor_cast_object(self):
def test_series_comparison_scalars(self):
val = datetime(2000, 1, 4)
result = self.series > val
- expected = np.array([x > val for x in self.series])
- self.assert_numpy_array_equal(result, expected)
+ expected = Series([x > val for x in self.series])
+ self.assert_series_equal(result, expected)
val = self.series[5]
result = self.series > val
- expected = np.array([x > val for x in self.series])
- self.assert_numpy_array_equal(result, expected)
+ expected = Series([x > val for x in self.series])
+ self.assert_series_equal(result, expected)
def test_between(self):
left, right = self.series[[2, 7]]
@@ -4775,10 +4777,9 @@ def test_date_range_normalize(self):
rng = date_range(snap, periods=n, normalize=False, freq='2D')
offset = timedelta(2)
- values = np.array([snap + i * offset for i in range(n)],
- dtype='M8[ns]')
+ values = DatetimeIndex([snap + i * offset for i in range(n)])
- self.assert_numpy_array_equal(rng, values)
+ tm.assert_index_equal(rng, values)
rng = date_range('1/1/2000 08:15', periods=n, normalize=False,
freq='B')
@@ -4797,7 +4798,7 @@ def test_timedelta(self):
result = index - timedelta(1)
expected = index + timedelta(-1)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
# GH4134, buggy with timedeltas
rng = date_range('2013', '2014')
@@ -4806,8 +4807,8 @@ def test_timedelta(self):
result2 = DatetimeIndex(s - np.timedelta64(100000000))
result3 = rng - np.timedelta64(100000000)
result4 = DatetimeIndex(s - pd.offsets.Hour(1))
- self.assertTrue(result1.equals(result4))
- self.assertTrue(result2.equals(result3))
+ tm.assert_index_equal(result1, result4)
+ tm.assert_index_equal(result2, result3)
def test_shift(self):
ts = Series(np.random.randn(5),
@@ -4815,12 +4816,12 @@ def test_shift(self):
result = ts.shift(1, freq='5T')
exp_index = ts.index.shift(1, freq='5T')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
# GH #1063, multiple of same base
result = ts.shift(1, freq='4H')
exp_index = ts.index + datetools.Hour(4)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
self.assertRaises(ValueError, idx.shift, 1)
@@ -4972,7 +4973,7 @@ def test_to_datetime_format(self):
elif isinstance(expected, Timestamp):
self.assertEqual(result, expected)
else:
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
def test_to_datetime_format_YYYYMMDD(self):
s = Series([19801222, 19801222] + [19810105] * 5)
@@ -5003,9 +5004,10 @@ def test_to_datetime_format_YYYYMMDD(self):
# GH 7930
s = Series([20121231, 20141231, 99991231])
result = pd.to_datetime(s, format='%Y%m%d', errors='ignore')
- expected = np.array([datetime(2012, 12, 31), datetime(
- 2014, 12, 31), datetime(9999, 12, 31)], dtype=object)
- self.assert_numpy_array_equal(result, expected)
+ expected = Series([datetime(2012, 12, 31),
+ datetime(2014, 12, 31), datetime(9999, 12, 31)],
+ dtype=object)
+ self.assert_series_equal(result, expected)
result = pd.to_datetime(s, format='%Y%m%d', errors='coerce')
expected = Series(['20121231', '20141231', 'NaT'], dtype='M8[ns]')
@@ -5092,18 +5094,13 @@ def test_to_datetime_format_weeks(self):
class TestToDatetimeInferFormat(tm.TestCase):
def test_to_datetime_infer_datetime_format_consistent_format(self):
- time_series = pd.Series(pd.date_range('20000101', periods=50,
- freq='H'))
+ s = pd.Series(pd.date_range('20000101', periods=50, freq='H'))
- test_formats = [
- '%m-%d-%Y',
- '%m/%d/%Y %H:%M:%S.%f',
- '%Y-%m-%dT%H:%M:%S.%f',
- ]
+ test_formats = ['%m-%d-%Y', '%m/%d/%Y %H:%M:%S.%f',
+ '%Y-%m-%dT%H:%M:%S.%f']
for test_format in test_formats:
- s_as_dt_strings = time_series.apply(
- lambda x: x.strftime(test_format))
+ s_as_dt_strings = s.apply(lambda x: x.strftime(test_format))
with_format = pd.to_datetime(s_as_dt_strings, format=test_format)
no_infer = pd.to_datetime(s_as_dt_strings,
@@ -5113,70 +5110,45 @@ def test_to_datetime_infer_datetime_format_consistent_format(self):
# Whether the format is explicitly passed, it is inferred, or
# it is not inferred, the results should all be the same
- self.assert_numpy_array_equal(with_format, no_infer)
- self.assert_numpy_array_equal(no_infer, yes_infer)
+ self.assert_series_equal(with_format, no_infer)
+ self.assert_series_equal(no_infer, yes_infer)
def test_to_datetime_infer_datetime_format_inconsistent_format(self):
- test_series = pd.Series(np.array([
- '01/01/2011 00:00:00',
- '01-02-2011 00:00:00',
- '2011-01-03T00:00:00',
- ]))
+ s = pd.Series(np.array(['01/01/2011 00:00:00',
+ '01-02-2011 00:00:00',
+ '2011-01-03T00:00:00']))
# When the format is inconsistent, infer_datetime_format should just
# fallback to the default parsing
- self.assert_numpy_array_equal(
- pd.to_datetime(test_series, infer_datetime_format=False),
- pd.to_datetime(test_series, infer_datetime_format=True)
- )
+ tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False),
+ pd.to_datetime(s, infer_datetime_format=True))
- test_series = pd.Series(np.array([
- 'Jan/01/2011',
- 'Feb/01/2011',
- 'Mar/01/2011',
- ]))
+ s = pd.Series(np.array(['Jan/01/2011', 'Feb/01/2011', 'Mar/01/2011']))
- self.assert_numpy_array_equal(
- pd.to_datetime(test_series, infer_datetime_format=False),
- pd.to_datetime(test_series, infer_datetime_format=True)
- )
+ tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False),
+ pd.to_datetime(s, infer_datetime_format=True))
def test_to_datetime_infer_datetime_format_series_with_nans(self):
- test_series = pd.Series(np.array([
- '01/01/2011 00:00:00',
- np.nan,
- '01/03/2011 00:00:00',
- np.nan,
- ]))
-
- self.assert_numpy_array_equal(
- pd.to_datetime(test_series, infer_datetime_format=False),
- pd.to_datetime(test_series, infer_datetime_format=True)
- )
+ s = pd.Series(np.array(['01/01/2011 00:00:00', np.nan,
+ '01/03/2011 00:00:00', np.nan]))
+ tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False),
+ pd.to_datetime(s, infer_datetime_format=True))
def test_to_datetime_infer_datetime_format_series_starting_with_nans(self):
- test_series = pd.Series(np.array([
- np.nan,
- np.nan,
- '01/01/2011 00:00:00',
- '01/02/2011 00:00:00',
- '01/03/2011 00:00:00',
- ]))
+ s = pd.Series(np.array([np.nan, np.nan, '01/01/2011 00:00:00',
+ '01/02/2011 00:00:00', '01/03/2011 00:00:00']))
- self.assert_numpy_array_equal(
- pd.to_datetime(test_series, infer_datetime_format=False),
- pd.to_datetime(test_series, infer_datetime_format=True)
- )
+ tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False),
+ pd.to_datetime(s, infer_datetime_format=True))
def test_to_datetime_iso8601_noleading_0s(self):
# GH 11871
- test_series = pd.Series(['2014-1-1', '2014-2-2', '2015-3-3'])
+ s = pd.Series(['2014-1-1', '2014-2-2', '2015-3-3'])
expected = pd.Series([pd.Timestamp('2014-01-01'),
pd.Timestamp('2014-02-02'),
pd.Timestamp('2015-03-03')])
- tm.assert_series_equal(pd.to_datetime(test_series), expected)
- tm.assert_series_equal(pd.to_datetime(test_series, format='%Y-%m-%d'),
- expected)
+ tm.assert_series_equal(pd.to_datetime(s), expected)
+ tm.assert_series_equal(pd.to_datetime(s, format='%Y-%m-%d'), expected)
class TestGuessDatetimeFormat(tm.TestCase):
diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py
index 086f23cd2d4fd..6f58ad3a57b48 100644
--- a/pandas/tseries/tests/test_timeseries_legacy.py
+++ b/pandas/tseries/tests/test_timeseries_legacy.py
@@ -85,7 +85,7 @@ def test_unpickle_legacy_len0_daterange(self):
ex_index = DatetimeIndex([], freq='B')
- self.assertTrue(result.index.equals(ex_index))
+ self.assert_index_equal(result.index, ex_index)
tm.assertIsInstance(result.index.freq, offsets.BDay)
self.assertEqual(len(result), 0)
@@ -116,7 +116,7 @@ def _check_join(left, right, how='inner'):
return_indexers=True)
tm.assertIsInstance(ra, DatetimeIndex)
- self.assertTrue(ra.equals(ea))
+ self.assert_index_equal(ra, ea)
assert_almost_equal(rb, eb)
assert_almost_equal(rc, ec)
@@ -150,24 +150,24 @@ def test_setops(self):
result = index[:5].union(obj_index[5:])
expected = index
tm.assertIsInstance(result, DatetimeIndex)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = index[:10].intersection(obj_index[5:])
expected = index[5:10]
tm.assertIsInstance(result, DatetimeIndex)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = index[:10] - obj_index[5:]
expected = index[:5]
tm.assertIsInstance(result, DatetimeIndex)
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
def test_index_conversion(self):
index = self.frame.index
obj_index = index.asobject
conv = DatetimeIndex(obj_index)
- self.assertTrue(conv.equals(index))
+ self.assert_index_equal(conv, index)
self.assertRaises(ValueError, DatetimeIndex, ['a', 'b', 'c', 'd'])
@@ -188,11 +188,11 @@ def test_setops_conversion_fail(self):
result = index.union(right)
expected = Index(np.concatenate([index.asobject, right]))
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
result = index.intersection(right)
expected = Index([])
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
def test_legacy_time_rules(self):
rules = [('WEEKDAY', 'B'), ('EOM', 'BM'), ('W@MON', 'W-MON'),
@@ -211,7 +211,7 @@ def test_legacy_time_rules(self):
for old_freq, new_freq in rules:
old_rng = date_range(start, end, freq=old_freq)
new_rng = date_range(start, end, freq=new_freq)
- self.assertTrue(old_rng.equals(new_rng))
+ self.assert_index_equal(old_rng, new_rng)
# test get_legacy_offset_name
offset = datetools.get_offset(new_freq)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 1f0632377c851..b80ee4c5c1e39 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -263,7 +263,7 @@ def test_create_with_fixed_tz(self):
self.assertEqual(off, rng.tz)
rng2 = date_range(start, periods=len(rng), tz=off)
- self.assertTrue(rng.equals(rng2))
+ self.assert_index_equal(rng, rng2)
rng3 = date_range('3/11/2012 05:00:00+07:00',
'6/11/2012 05:00:00+07:00')
@@ -287,7 +287,7 @@ def test_date_range_localize(self):
rng3 = date_range('3/11/2012 03:00', periods=15, freq='H')
rng3 = rng3.tz_localize('US/Eastern')
- self.assertTrue(rng.equals(rng3))
+ self.assert_index_equal(rng, rng3)
# DST transition time
val = rng[0]
@@ -296,14 +296,14 @@ def test_date_range_localize(self):
self.assertEqual(val.hour, 3)
self.assertEqual(exp.hour, 3)
self.assertEqual(val, exp) # same UTC value
- self.assertTrue(rng[:2].equals(rng2))
+ self.assert_index_equal(rng[:2], rng2)
# Right before the DST transition
rng = date_range('3/11/2012 00:00', periods=2, freq='H',
tz='US/Eastern')
rng2 = DatetimeIndex(['3/11/2012 00:00', '3/11/2012 01:00'],
tz='US/Eastern')
- self.assertTrue(rng.equals(rng2))
+ self.assert_index_equal(rng, rng2)
exp = Timestamp('3/11/2012 00:00', tz='US/Eastern')
self.assertEqual(exp.hour, 0)
self.assertEqual(rng[0], exp)
@@ -402,7 +402,7 @@ def test_tz_localize(self):
dr = bdate_range('1/1/2009', '1/1/2010')
dr_utc = bdate_range('1/1/2009', '1/1/2010', tz=pytz.utc)
localized = dr.tz_localize(pytz.utc)
- self.assert_numpy_array_equal(dr_utc, localized)
+ self.assert_index_equal(dr_utc, localized)
def test_with_tz_ambiguous_times(self):
tz = self.tz('US/Eastern')
@@ -440,22 +440,22 @@ def test_ambiguous_infer(self):
'11/06/2011 02:00', '11/06/2011 03:00']
di = DatetimeIndex(times)
localized = di.tz_localize(tz, ambiguous='infer')
- self.assert_numpy_array_equal(dr, localized)
+ self.assert_index_equal(dr, localized)
with tm.assert_produces_warning(FutureWarning):
localized_old = di.tz_localize(tz, infer_dst=True)
- self.assert_numpy_array_equal(dr, localized_old)
- self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz,
- ambiguous='infer'))
+ self.assert_index_equal(dr, localized_old)
+ self.assert_index_equal(dr, DatetimeIndex(times, tz=tz,
+ ambiguous='infer'))
# When there is no dst transition, nothing special happens
dr = date_range(datetime(2011, 6, 1, 0), periods=10,
freq=datetools.Hour())
localized = dr.tz_localize(tz)
localized_infer = dr.tz_localize(tz, ambiguous='infer')
- self.assert_numpy_array_equal(localized, localized_infer)
+ self.assert_index_equal(localized, localized_infer)
with tm.assert_produces_warning(FutureWarning):
localized_infer_old = dr.tz_localize(tz, infer_dst=True)
- self.assert_numpy_array_equal(localized, localized_infer_old)
+ self.assert_index_equal(localized, localized_infer_old)
def test_ambiguous_flags(self):
# November 6, 2011, fall back, repeat 2 AM hour
@@ -471,20 +471,20 @@ def test_ambiguous_flags(self):
di = DatetimeIndex(times)
is_dst = [1, 1, 0, 0, 0]
localized = di.tz_localize(tz, ambiguous=is_dst)
- self.assert_numpy_array_equal(dr, localized)
- self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz,
- ambiguous=is_dst))
+ self.assert_index_equal(dr, localized)
+ self.assert_index_equal(dr, DatetimeIndex(times, tz=tz,
+ ambiguous=is_dst))
localized = di.tz_localize(tz, ambiguous=np.array(is_dst))
- self.assert_numpy_array_equal(dr, localized)
+ self.assert_index_equal(dr, localized)
localized = di.tz_localize(tz,
ambiguous=np.array(is_dst).astype('bool'))
- self.assert_numpy_array_equal(dr, localized)
+ self.assert_index_equal(dr, localized)
# Test constructor
localized = DatetimeIndex(times, tz=tz, ambiguous=is_dst)
- self.assert_numpy_array_equal(dr, localized)
+ self.assert_index_equal(dr, localized)
# Test duplicate times where infer_dst fails
times += times
@@ -497,7 +497,7 @@ def test_ambiguous_flags(self):
is_dst = np.hstack((is_dst, is_dst))
localized = di.tz_localize(tz, ambiguous=is_dst)
dr = dr.append(dr)
- self.assert_numpy_array_equal(dr, localized)
+ self.assert_index_equal(dr, localized)
# When there is no dst transition, nothing special happens
dr = date_range(datetime(2011, 6, 1, 0), periods=10,
@@ -505,7 +505,7 @@ def test_ambiguous_flags(self):
is_dst = np.array([1] * 10)
localized = dr.tz_localize(tz)
localized_is_dst = dr.tz_localize(tz, ambiguous=is_dst)
- self.assert_numpy_array_equal(localized, localized_is_dst)
+ self.assert_index_equal(localized, localized_is_dst)
# construction with an ambiguous end-point
# GH 11626
@@ -531,7 +531,10 @@ def test_ambiguous_nat(self):
times = ['11/06/2011 00:00', np.NaN, np.NaN, '11/06/2011 02:00',
'11/06/2011 03:00']
di_test = DatetimeIndex(times, tz='US/Eastern')
- self.assert_numpy_array_equal(di_test, localized)
+
+ # left dtype is datetime64[ns, US/Eastern]
+ # right is datetime64[ns, tzfile('/usr/share/zoneinfo/US/Eastern')]
+ self.assert_numpy_array_equal(di_test.values, localized.values)
def test_nonexistent_raise_coerce(self):
# See issue 13057
@@ -580,7 +583,7 @@ def test_tz_string(self):
tz=self.tzstr('US/Eastern'))
expected = date_range('1/1/2000', periods=10, tz=self.tz('US/Eastern'))
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
def test_take_dont_lose_meta(self):
tm._skip_if_no_pytz()
@@ -673,7 +676,7 @@ def test_convert_tz_aware_datetime_datetime(self):
self.assertTrue(self.cmptz(result.tz, self.tz('US/Eastern')))
converted = to_datetime(dates_aware, utc=True)
- ex_vals = [Timestamp(x).value for x in dates_aware]
+ ex_vals = np.array([Timestamp(x).value for x in dates_aware])
self.assert_numpy_array_equal(converted.asi8, ex_vals)
self.assertIs(converted.tz, pytz.utc)
@@ -779,10 +782,11 @@ def test_date_range_span_dst_transition(self):
self.assertTrue((dr.hour == 0).all())
def test_convert_datetime_list(self):
- dr = date_range('2012-06-02', periods=10, tz=self.tzstr('US/Eastern'))
+ dr = date_range('2012-06-02', periods=10,
+ tz=self.tzstr('US/Eastern'), name='foo')
dr2 = DatetimeIndex(list(dr), name='foo')
- self.assertTrue(dr.equals(dr2))
+ self.assert_index_equal(dr, dr2)
self.assertEqual(dr.tz, dr2.tz)
self.assertEqual(dr2.name, 'foo')
@@ -845,7 +849,7 @@ def test_datetimeindex_tz(self):
idx4 = DatetimeIndex(np.array(arr), tz=self.tzstr('US/Eastern'))
for other in [idx2, idx3, idx4]:
- self.assertTrue(idx1.equals(other))
+ self.assert_index_equal(idx1, other)
def test_datetimeindex_tz_nat(self):
idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')),
@@ -1011,7 +1015,7 @@ def test_tz_localize_naive(self):
conv = rng.tz_localize('US/Pacific')
exp = date_range('1/1/2011', periods=100, freq='H', tz='US/Pacific')
- self.assertTrue(conv.equals(exp))
+ self.assert_index_equal(conv, exp)
def test_tz_localize_roundtrip(self):
for tz in self.timezones:
@@ -1143,7 +1147,7 @@ def test_join_aware(self):
result = test1.join(test2, how='outer')
ex_index = test1.index.union(test2.index)
- self.assertTrue(result.index.equals(ex_index))
+ self.assert_index_equal(result.index, ex_index)
self.assertTrue(result.index.tz.zone == 'US/Central')
# non-overlapping
@@ -1199,11 +1203,11 @@ def test_append_aware_naive(self):
ts1 = Series(np.random.randn(len(rng1)), index=rng1)
ts2 = Series(np.random.randn(len(rng2)), index=rng2)
ts_result = ts1.append(ts2)
+
self.assertTrue(ts_result.index.equals(ts1.index.asobject.append(
ts2.index.asobject)))
# mixed
-
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H')
rng2 = lrange(100)
ts1 = Series(np.random.randn(len(rng1)), index=rng1)
@@ -1280,7 +1284,7 @@ def test_datetimeindex_tz(self):
rng = date_range('03/12/2012 00:00', periods=10, freq='W-FRI',
tz='US/Eastern')
rng2 = DatetimeIndex(data=rng, tz='US/Eastern')
- self.assertTrue(rng.equals(rng2))
+ self.assert_index_equal(rng, rng2)
def test_normalize_tz(self):
rng = date_range('1/1/2000 9:30', periods=10, freq='D',
@@ -1289,7 +1293,7 @@ def test_normalize_tz(self):
result = rng.normalize()
expected = date_range('1/1/2000', periods=10, freq='D',
tz='US/Eastern')
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
self.assertFalse(rng.is_normalized)
@@ -1298,7 +1302,7 @@ def test_normalize_tz(self):
result = rng.normalize()
expected = date_range('1/1/2000', periods=10, freq='D', tz='UTC')
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
self.assertFalse(rng.is_normalized)
@@ -1307,7 +1311,7 @@ def test_normalize_tz(self):
rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz=tzlocal())
result = rng.normalize()
expected = date_range('1/1/2000', periods=10, freq='D', tz=tzlocal())
- self.assertTrue(result.equals(expected))
+ self.assert_index_equal(result, expected)
self.assertTrue(result.is_normalized)
self.assertFalse(rng.is_normalized)
@@ -1324,45 +1328,45 @@ def test_tzaware_offset(self):
'2010-11-01 07:00'], freq='H', tz=tz)
offset = dates + offsets.Hour(5)
- self.assertTrue(offset.equals(expected))
+ self.assert_index_equal(offset, expected)
offset = dates + np.timedelta64(5, 'h')
- self.assertTrue(offset.equals(expected))
+ self.assert_index_equal(offset, expected)
offset = dates + timedelta(hours=5)
- self.assertTrue(offset.equals(expected))
+ self.assert_index_equal(offset, expected)
def test_nat(self):
# GH 5546
dates = [NaT]
idx = DatetimeIndex(dates)
idx = idx.tz_localize('US/Pacific')
- self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Pacific')))
+ self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Pacific'))
idx = idx.tz_convert('US/Eastern')
- self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Eastern')))
+ self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Eastern'))
idx = idx.tz_convert('UTC')
- self.assertTrue(idx.equals(DatetimeIndex(dates, tz='UTC')))
+ self.assert_index_equal(idx, DatetimeIndex(dates, tz='UTC'))
dates = ['2010-12-01 00:00', '2010-12-02 00:00', NaT]
idx = DatetimeIndex(dates)
idx = idx.tz_localize('US/Pacific')
- self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Pacific')))
+ self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Pacific'))
idx = idx.tz_convert('US/Eastern')
expected = ['2010-12-01 03:00', '2010-12-02 03:00', NaT]
- self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern')))
+ self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern'))
idx = idx + offsets.Hour(5)
expected = ['2010-12-01 08:00', '2010-12-02 08:00', NaT]
- self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern')))
+ self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern'))
idx = idx.tz_convert('US/Pacific')
expected = ['2010-12-01 05:00', '2010-12-02 05:00', NaT]
- self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Pacific')))
+ self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Pacific'))
idx = idx + np.timedelta64(3, 'h')
expected = ['2010-12-01 08:00', '2010-12-02 08:00', NaT]
- self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Pacific')))
+ self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Pacific'))
idx = idx.tz_convert('US/Eastern')
expected = ['2010-12-01 11:00', '2010-12-02 11:00', NaT]
- self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern')))
+ self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern'))
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 8414a5ed42991..d7426daa794c3 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -812,8 +812,9 @@ def test_parsers_time(self):
self.assert_series_equal(tools.to_time(Series(arg, name="test")),
Series(expected_arr, name="test"))
- self.assert_numpy_array_equal(tools.to_time(np.array(arg)),
- np.array(expected_arr, dtype=np.object_))
+ res = tools.to_time(np.array(arg))
+ self.assertIsInstance(res, list)
+ self.assert_equal(res, expected_arr)
def test_parsers_monthfreq(self):
cases = {'201101': datetime.datetime(2011, 1, 1, 0, 0),
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index e39dc441bcca4..f2b5bf7d2739d 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -31,7 +31,6 @@
from pandas.core.algorithms import take_1d
import pandas.compat as compat
-import pandas.lib as lib
from pandas.compat import(
filter, map, zip, range, unichr, lrange, lmap, lzip, u, callable, Counter,
raise_with_traceback, httplib, is_platform_windows, is_platform_32bit,
@@ -116,25 +115,39 @@ def assertNotAlmostEquals(self, *args, **kwargs):
self.assertNotAlmostEqual)(*args, **kwargs)
-def assert_almost_equal(left, right, check_exact=False, **kwargs):
+def assert_almost_equal(left, right, check_exact=False,
+ check_dtype='equiv', **kwargs):
if isinstance(left, pd.Index):
return assert_index_equal(left, right, check_exact=check_exact,
- **kwargs)
+ exact=check_dtype, **kwargs)
elif isinstance(left, pd.Series):
return assert_series_equal(left, right, check_exact=check_exact,
- **kwargs)
+ check_dtype=check_dtype, **kwargs)
elif isinstance(left, pd.DataFrame):
return assert_frame_equal(left, right, check_exact=check_exact,
- **kwargs)
+ check_dtype=check_dtype, **kwargs)
- return _testing.assert_almost_equal(left, right, **kwargs)
+ else:
+ # other sequences
+ if check_dtype:
+ if is_number(left) and is_number(right):
+ # do not compare numeric classes, like np.float64 and float
+ pass
+ else:
+ if (isinstance(left, np.ndarray) or
+ isinstance(right, np.ndarray)):
+ obj = 'numpy array'
+ else:
+ obj = 'Input'
+ assert_class_equal(left, right, obj=obj)
+ return _testing.assert_almost_equal(left, right,
+ check_dtype=check_dtype, **kwargs)
def assert_dict_equal(left, right, compare_keys=True):
- # instance validation
assertIsInstance(left, dict, '[dict] ')
assertIsInstance(right, dict, '[dict] ')
@@ -966,33 +979,29 @@ def assert_numpy_array_equal(left, right, strict_nan=False,
assertion message
"""
+ # instance validation
+ # to show a detailed error message when classes are different
+ assert_class_equal(left, right, obj=obj)
+ # both classes must be an np.ndarray
+ assertIsInstance(left, np.ndarray, '[ndarray] ')
+ assertIsInstance(right, np.ndarray, '[ndarray] ')
+
def _raise(left, right, err_msg):
if err_msg is None:
- # show detailed error
- if lib.isscalar(left) and lib.isscalar(right):
- # show scalar comparison error
- assert_equal(left, right)
- elif is_list_like(left) and is_list_like(right):
- # some test cases pass list
- left = np.asarray(left)
- right = np.array(right)
-
- if left.shape != right.shape:
- raise_assert_detail(obj, '{0} shapes are different'
- .format(obj), left.shape, right.shape)
-
- diff = 0
- for l, r in zip(left, right):
- # count up differences
- if not array_equivalent(l, r, strict_nan=strict_nan):
- diff += 1
-
- diff = diff * 100.0 / left.size
- msg = '{0} values are different ({1} %)'\
- .format(obj, np.round(diff, 5))
- raise_assert_detail(obj, msg, left, right)
- else:
- assert_class_equal(left, right, obj=obj)
+ if left.shape != right.shape:
+ raise_assert_detail(obj, '{0} shapes are different'
+ .format(obj), left.shape, right.shape)
+
+ diff = 0
+ for l, r in zip(left, right):
+ # count up differences
+ if not array_equivalent(l, r, strict_nan=strict_nan):
+ diff += 1
+
+ diff = diff * 100.0 / left.size
+ msg = '{0} values are different ({1} %)'\
+ .format(obj, np.round(diff, 5))
+ raise_assert_detail(obj, msg, left, right)
raise AssertionError(err_msg)
@@ -1076,8 +1085,8 @@ def assert_series_equal(left, right, check_dtype=True,
if check_exact:
assert_numpy_array_equal(left.get_values(), right.get_values(),
- obj='{0}'.format(obj),
- check_dtype=check_dtype)
+ check_dtype=check_dtype,
+ obj='{0}'.format(obj),)
elif check_datetimelike_compat:
# we want to check only if we have compat dtypes
# e.g. integer and M|m are NOT compat, but we can simply check
@@ -1093,7 +1102,7 @@ def assert_series_equal(left, right, check_dtype=True,
msg = '[datetimelike_compat=True] {0} is not equal to {1}.'
raise AssertionError(msg.format(left.values, right.values))
else:
- assert_numpy_array_equal(left.values, right.values,
+ assert_numpy_array_equal(left.get_values(), right.get_values(),
check_dtype=check_dtype)
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
@@ -1314,11 +1323,7 @@ def assert_sp_array_equal(left, right):
raise_assert_detail('SparseArray.index', 'index are not equal',
left.sp_index, right.sp_index)
- if np.isnan(left.fill_value):
- assert (np.isnan(right.fill_value))
- else:
- assert (left.fill_value == right.fill_value)
-
+ assert_attr_equal('fill_value', left, right)
assert_attr_equal('dtype', left, right)
assert_numpy_array_equal(left.values, right.values)
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
2 changes to make tests more strict:
- `assert_numpy_array_equal` now checks input is `np.ndarray`
- `assert_almost_equal` now checks inputs are the same class.
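A minimal sketch of the distinction these stricter helpers enforce. The module path in the patch is `pandas.util.testing` (imported as `tm` in the tests); the sketch below uses the equivalent public `pd.testing` entry point, which is an assumption about the reader's pandas version rather than part of the patch:

```python
import pandas as pd

idx1 = pd.Index([1, 2, 3])
idx2 = pd.Index([1, 2, 3])

# Index.equals only returns a boolean, so a failing
# self.assertTrue(idx1.equals(idx2)) reports no detail at all.
assert idx1.equals(idx2)

# assert_index_equal raises with a detailed message showing which
# values and dtypes differ, which is why the patch swaps it in.
pd.testing.assert_index_equal(idx1, idx2)
```

With the stricter `assert_numpy_array_equal`, passing an `Index` or `Series` where an `np.ndarray` is expected now fails fast instead of silently comparing values, which is what motivates the many `.values` / `tm.assert_index_equal` conversions above.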
| https://api.github.com/repos/pandas-dev/pandas/pulls/13311 | 2016-05-28T03:59:42Z | 2016-05-28T17:26:17Z | null | 2016-05-28T22:17:13Z |
DOC: remove references to deprecated numpy negation method | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 1be6ce810791b..875c5f8a2a707 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -142,7 +142,7 @@ def _isnull_old(obj):
def _use_inf_as_null(key):
"""Option change callback for null/inf behaviour
- Choose which replacement for numpy.isnan / -numpy.isfinite is used.
+ Choose which replacement for numpy.isnan / ~numpy.isfinite is used.
Parameters
----------
@@ -233,7 +233,7 @@ def _isnull_ndarraylike_old(obj):
def notnull(obj):
- """Replacement for numpy.isfinite / -numpy.isnan which is suitable for use
+ """Replacement for numpy.isfinite / ~numpy.isnan which is suitable for use
on object arrays.
Parameters
@@ -1115,7 +1115,7 @@ def _possibly_cast_to_datetime(value, dtype, errors='raise'):
def _possibly_infer_to_datetimelike(value, convert_dates=False):
"""
- we might have a array (or single object) that is datetime like,
+ we might have an array (or single object) that is datetime like,
and no dtype is passed don't change the value unless we find a
datetime/timedelta set
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
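For context on why the docstrings are updated (a minimal illustration, not part of the patch): NumPy deprecated, and later removed, unary minus on boolean arrays, so `~` is the supported way to invert a mask:

```python
import numpy as np

mask = np.array([True, False, True])

# `~mask` is elementwise logical NOT. The old spelling `-mask`
# triggered a DeprecationWarning and now raises TypeError on
# boolean arrays in modern NumPy.
inverted = ~mask
print(inverted)  # [False  True False]
```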
| https://api.github.com/repos/pandas-dev/pandas/pulls/13310 | 2016-05-28T03:17:01Z | 2016-05-28T17:30:44Z | null | 2016-05-28T17:30:48Z |
ENH: Respect Key Ordering for OrderedDict List in DataFrame Init | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 2b67aca1dcf74..6102c5f41300f 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -87,6 +87,7 @@ Other enhancements
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
- Consistent with the Python API, ``pd.read_csv()`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
+- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2c8106571f198..69def7502a6f7 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5537,7 +5537,8 @@ def _list_of_series_to_arrays(data, columns, coerce_float=False, dtype=None):
def _list_of_dict_to_arrays(data, columns, coerce_float=False, dtype=None):
if columns is None:
gen = (list(x.keys()) for x in data)
- columns = lib.fast_unique_multiple_list_gen(gen)
+ sort = not any(isinstance(d, OrderedDict) for d in data)
+ columns = lib.fast_unique_multiple_list_gen(gen, sort=sort)
# assure that they are of the base dict class and not of derived
# classes
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 328166168a3fc..a9c7f93097f1b 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -493,7 +493,21 @@ def fast_unique_multiple_list(list lists):
@cython.wraparound(False)
@cython.boundscheck(False)
-def fast_unique_multiple_list_gen(object gen):
+def fast_unique_multiple_list_gen(object gen, bint sort=True):
+ """
+ Generate a list of unique values from a generator of lists.
+
+ Parameters
+ ----------
+ gen : generator object
+ A generator of lists from which the unique list is created
+ sort : boolean
+ Whether or not to sort the resulting unique list
+
+ Returns
+ -------
+ unique_list : list of unique values
+ """
cdef:
list buf
Py_ssize_t j, n
@@ -508,11 +522,11 @@ def fast_unique_multiple_list_gen(object gen):
if val not in table:
table[val] = stub
uniques.append(val)
-
- try:
- uniques.sort()
- except Exception:
- pass
+ if sort:
+ try:
+ uniques.sort()
+ except Exception:
+ pass
return uniques
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index a050d74f0fc51..b42aef9447373 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -891,6 +891,45 @@ def test_constructor_list_of_dicts(self):
expected = DataFrame(index=[0])
tm.assert_frame_equal(result, expected)
+ def test_constructor_ordered_dict_preserve_order(self):
+ # see gh-13304
+ expected = DataFrame([[2, 1]], columns=['b', 'a'])
+
+ data = OrderedDict()
+ data['b'] = [2]
+ data['a'] = [1]
+
+ result = DataFrame(data)
+ tm.assert_frame_equal(result, expected)
+
+ data = OrderedDict()
+ data['b'] = 2
+ data['a'] = 1
+
+ result = DataFrame([data])
+ tm.assert_frame_equal(result, expected)
+
+ def test_constructor_ordered_dict_conflicting_orders(self):
+ # the first dict element sets the ordering for the DataFrame,
+ # even if there are conflicting orders from subsequent ones
+ row_one = OrderedDict()
+ row_one['b'] = 2
+ row_one['a'] = 1
+
+ row_two = OrderedDict()
+ row_two['a'] = 1
+ row_two['b'] = 2
+
+ row_three = {'b': 2, 'a': 1}
+
+ expected = DataFrame([[2, 1], [2, 1]], columns=['b', 'a'])
+ result = DataFrame([row_one, row_two])
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame([[2, 1], [2, 1], [2, 1]], columns=['b', 'a'])
+ result = DataFrame([row_one, row_two, row_three])
+ tm.assert_frame_equal(result, expected)
+
def test_constructor_list_of_series(self):
data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
@@ -1870,3 +1909,9 @@ def test_from_index(self):
tm.assert_series_equal(df2[0], Series(idx2, name=0))
df2 = DataFrame(Series(idx2))
tm.assert_series_equal(df2[0], Series(idx2, name=0))
+
+if __name__ == '__main__':
+ import nose # noqa
+
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index bfac0aa83b434..10a6bb5c75b01 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -24,6 +24,19 @@ def test_max_len_string_array(self):
tm.assertRaises(TypeError,
lambda: lib.max_len_string_array(arr.astype('U')))
+ def test_fast_unique_multiple_list_gen_sort(self):
+ keys = [['p', 'a'], ['n', 'd'], ['a', 's']]
+
+ gen = (key for key in keys)
+ expected = np.array(['a', 'd', 'n', 'p', 's'])
+ out = lib.fast_unique_multiple_list_gen(gen, sort=True)
+ tm.assert_numpy_array_equal(np.array(out), expected)
+
+ gen = (key for key in keys)
+ expected = np.array(['p', 'a', 'n', 'd', 's'])
+ out = lib.fast_unique_multiple_list_gen(gen, sort=False)
+ tm.assert_numpy_array_equal(np.array(out), expected)
+
class TestIndexing(tm.TestCase):
| Title is self-explanatory. Closes #13304.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13309 | 2016-05-28T00:50:03Z | 2016-05-31T14:31:03Z | null | 2016-05-31T17:37:38Z |
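The behavioral change in the record above can be illustrated directly; a minimal sketch (assuming a pandas version that includes this fix, where the `DataFrame` constructor preserves `OrderedDict` key order instead of sorting column names):

```python
from collections import OrderedDict

import pandas as pd

# Each row dict lists 'b' before 'a'; with the fix, the constructor keeps
# that order rather than sorting the columns alphabetically.
rows = [OrderedDict([('b', 2), ('a', 1)]),
        OrderedDict([('b', 4), ('a', 3)])]
df = pd.DataFrame(rows)

# Column order follows the first OrderedDict in the list
print(list(df.columns))
```

As the new `test_constructor_ordered_dict_conflicting_orders` test shows, the first dict in the list sets the ordering even when later dicts disagree.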
Fix series comparison operators when dealing with zero rank numpy arrays | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 3fc1a69cb600e..2f718d3084a1e 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -101,7 +101,7 @@ API changes
- Non-convertible dates in an excel date column will be returned without conversion and the column will be ``object`` dtype, rather than raising an exception (:issue:`10001`)
-- An ``UnsupportedFunctionCall`` error is now raised if numpy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
+- An ``UnsupportedFunctionCall`` error is now raised if NumPy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
- Calls to ``.sample()`` will respect the random seed set via ``numpy.random.seed(n)`` (:issue:`13161`)
.. _whatsnew_0182.api.tolist:
@@ -365,6 +365,7 @@ Bug Fixes
- Bug in ``.unstack`` with ``Categorical`` dtype resets ``.ordered`` to ``True`` (:issue:`13249`)
+- Bug in ``Series`` comparison operators when dealing with zero rank NumPy arrays (:issue:`13006`)
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index d1bb67fa0bc13..f647f7ecf4ec7 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -754,7 +754,9 @@ def wrapper(self, other, axis=None):
elif isinstance(other, pd.DataFrame): # pragma: no cover
return NotImplemented
elif isinstance(other, (np.ndarray, pd.Index)):
- if len(self) != len(other):
+ # do not check length of zerorank array
+ if not lib.isscalar(lib.item_from_zerodim(other)) and \
+ len(self) != len(other):
raise ValueError('Lengths must match to compare')
return self._constructor(na_op(self.values, np.asarray(other)),
index=self.index).__finalize__(self)
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 3588faa8b42f1..1e23c87fdb4ca 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -264,6 +264,18 @@ def test_operators_timedelta64(self):
rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1))
self.assertEqual(rs[2], value)
+ def test_operator_series_comparison_zerorank(self):
+ # GH 13006
+ result = np.float64(0) > pd.Series([1, 2, 3])
+ expected = 0.0 > pd.Series([1, 2, 3])
+ self.assert_series_equal(result, expected)
+ result = pd.Series([1, 2, 3]) < np.float64(0)
+ expected = pd.Series([1, 2, 3]) < 0.0
+ self.assert_series_equal(result, expected)
+ result = np.array([0, 1, 2])[0] > pd.Series([0, 1, 2])
+ expected = 0.0 > pd.Series([1, 2, 3])
+ self.assert_series_equal(result, expected)
+
def test_timedeltas_with_DateOffset(self):
# GH 4532
| - [x] closes #13006
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13307 | 2016-05-27T22:48:21Z | 2016-06-03T15:04:18Z | null | 2016-06-03T15:04:28Z |
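The fix in the record above can be exercised with a short sketch (assuming a pandas version that includes it): a zero-rank NumPy scalar such as `np.float64(0)` now compares element-wise against a `Series` like a plain Python float, instead of tripping the `len(self) != len(other)` length check:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3])

# np.float64(0) is an ndarray-like zero-rank scalar; before the fix this
# raised "Lengths must match to compare" because len() was applied to it.
result = np.float64(0) > s
expected = 0.0 > s

print(result.equals(expected))
```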
TST: reorg datetime with tz tests a bit | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 1d043297aa1fa..6913df765862d 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -17,7 +17,7 @@
from pandas.compat import (lmap, long, zip, range, lrange, lzip,
OrderedDict, is_platform_little_endian)
from pandas import compat
-from pandas import (DataFrame, Index, Series, notnull, isnull,
+from pandas import (DataFrame, Index, Series, isnull,
MultiIndex, Timedelta, Timestamp,
date_range)
from pandas.core.common import PandasError
@@ -25,8 +25,6 @@
import pandas.core.common as com
import pandas.lib as lib
-from pandas.types.api import DatetimeTZDtype
-
from pandas.util.testing import (assert_numpy_array_equal,
assert_series_equal,
assert_frame_equal,
@@ -1329,185 +1327,6 @@ def test_constructor_with_datetimes(self):
.reset_index(drop=True), 'b': i_no_tz})
assert_frame_equal(df, expected)
- def test_constructor_with_datetime_tz(self):
-
- # 8260
- # support datetime64 with tz
-
- idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
- name='foo')
- dr = date_range('20130110', periods=3)
-
- # construction
- df = DataFrame({'A': idx, 'B': dr})
- self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern')
- self.assertTrue(df['A'].name == 'A')
- assert_series_equal(df['A'], Series(idx, name='A'))
- assert_series_equal(df['B'], Series(dr, name='B'))
-
- # construction from dict
- df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
- B=Timestamp('20130603', tz='CET')),
- index=range(5))
- assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]',
- 'datetime64[ns, CET]'],
- index=['A', 'B']))
-
- # dtypes
- tzframe = DataFrame({'A': date_range('20130101', periods=3),
- 'B': date_range('20130101', periods=3,
- tz='US/Eastern'),
- 'C': date_range('20130101', periods=3, tz='CET')})
- tzframe.iloc[1, 1] = pd.NaT
- tzframe.iloc[1, 2] = pd.NaT
- result = tzframe.dtypes.sort_index()
- expected = Series([np.dtype('datetime64[ns]'),
- DatetimeTZDtype('datetime64[ns, US/Eastern]'),
- DatetimeTZDtype('datetime64[ns, CET]')],
- ['A', 'B', 'C'])
-
- # concat
- df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1)
- assert_frame_equal(df2, df3)
-
- # select_dtypes
- result = df3.select_dtypes(include=['datetime64[ns]'])
- expected = df3.reindex(columns=[])
- assert_frame_equal(result, expected)
-
- # this will select based on issubclass, and these are the same class
- result = df3.select_dtypes(include=['datetime64[ns, CET]'])
- expected = df3
- assert_frame_equal(result, expected)
-
- # from index
- idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
- df2 = DataFrame(idx2)
- assert_series_equal(df2['foo'], Series(idx2, name='foo'))
- df2 = DataFrame(Series(idx2))
- assert_series_equal(df2['foo'], Series(idx2, name='foo'))
-
- idx2 = date_range('20130101', periods=3, tz='US/Eastern')
- df2 = DataFrame(idx2)
- assert_series_equal(df2[0], Series(idx2, name=0))
- df2 = DataFrame(Series(idx2))
- assert_series_equal(df2[0], Series(idx2, name=0))
-
- # interleave with object
- result = self.tzframe.assign(D='foo').values
- expected = np.array([[Timestamp('2013-01-01 00:00:00'),
- Timestamp('2013-01-02 00:00:00'),
- Timestamp('2013-01-03 00:00:00')],
- [Timestamp('2013-01-01 00:00:00-0500',
- tz='US/Eastern'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00-0500',
- tz='US/Eastern')],
- [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00+0100', tz='CET')],
- ['foo', 'foo', 'foo']], dtype=object).T
- self.assert_numpy_array_equal(result, expected)
-
- # interleave with only datetime64[ns]
- result = self.tzframe.values
- expected = np.array([[Timestamp('2013-01-01 00:00:00'),
- Timestamp('2013-01-02 00:00:00'),
- Timestamp('2013-01-03 00:00:00')],
- [Timestamp('2013-01-01 00:00:00-0500',
- tz='US/Eastern'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00-0500',
- tz='US/Eastern')],
- [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00+0100',
- tz='CET')]], dtype=object).T
- self.assert_numpy_array_equal(result, expected)
-
- # astype
- expected = np.array([[Timestamp('2013-01-01 00:00:00'),
- Timestamp('2013-01-02 00:00:00'),
- Timestamp('2013-01-03 00:00:00')],
- [Timestamp('2013-01-01 00:00:00-0500',
- tz='US/Eastern'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00-0500',
- tz='US/Eastern')],
- [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
- pd.NaT,
- Timestamp('2013-01-03 00:00:00+0100',
- tz='CET')]],
- dtype=object).T
- result = self.tzframe.astype(object)
- assert_frame_equal(result, DataFrame(
- expected, index=self.tzframe.index, columns=self.tzframe.columns))
-
- result = self.tzframe.astype('datetime64[ns]')
- expected = DataFrame({'A': date_range('20130101', periods=3),
- 'B': (date_range('20130101', periods=3,
- tz='US/Eastern')
- .tz_convert('UTC')
- .tz_localize(None)),
- 'C': (date_range('20130101', periods=3,
- tz='CET')
- .tz_convert('UTC')
- .tz_localize(None))})
- expected.iloc[1, 1] = pd.NaT
- expected.iloc[1, 2] = pd.NaT
- assert_frame_equal(result, expected)
-
- # str formatting
- result = self.tzframe.astype(str)
- expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00',
- '2013-01-01 00:00:00+01:00'],
- ['2013-01-02', 'NaT', 'NaT'],
- ['2013-01-03', '2013-01-03 00:00:00-05:00',
- '2013-01-03 00:00:00+01:00']], dtype=object)
- self.assert_numpy_array_equal(result, expected)
-
- result = str(self.tzframe)
- self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 '
- '2013-01-01 00:00:00+01:00' in result)
- self.assertTrue('1 2013-01-02 '
- 'NaT NaT' in result)
- self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 '
- '2013-01-03 00:00:00+01:00' in result)
-
- # setitem
- df['C'] = idx
- assert_series_equal(df['C'], Series(idx, name='C'))
-
- df['D'] = 'foo'
- df['D'] = idx
- assert_series_equal(df['D'], Series(idx, name='D'))
- del df['D']
-
- # assert that A & C are not sharing the same base (e.g. they
- # are copies)
- b1 = df._data.blocks[1]
- b2 = df._data.blocks[2]
- self.assertTrue(b1.values.equals(b2.values))
- self.assertFalse(id(b1.values.values.base) ==
- id(b2.values.values.base))
-
- # with nan
- df2 = df.copy()
- df2.iloc[1, 1] = pd.NaT
- df2.iloc[1, 2] = pd.NaT
- result = df2['B']
- assert_series_equal(notnull(result), Series(
- [True, False, True], name='B'))
- assert_series_equal(df2.dtypes, df.dtypes)
-
- # set/reset
- df = DataFrame({'A': [0, 1, 2]}, index=idx)
- result = df.reset_index()
- self.assertTrue(result['foo'].dtype, 'M8[ns, US/Eastern')
-
- result = result.set_index('foo')
- tm.assert_index_equal(df.index, idx)
-
def test_constructor_for_list_with_dtypes(self):
# TODO(wesm): unused
intname = np.dtype(np.int_).name # noqa
@@ -2018,3 +1837,39 @@ def test_from_records_len0_with_columns(self):
self.assertTrue(np.array_equal(result.columns, ['bar']))
self.assertEqual(len(result), 0)
self.assertEqual(result.index.name, 'foo')
+
+
+class TestDataFrameConstructorWithDatetimeTZ(tm.TestCase, TestData):
+
+ _multiprocess_can_split_ = True
+
+ def test_from_dict(self):
+
+ # 8260
+ # support datetime64 with tz
+
+ idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
+ name='foo')
+ dr = date_range('20130110', periods=3)
+
+ # construction
+ df = DataFrame({'A': idx, 'B': dr})
+ self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern')
+ self.assertTrue(df['A'].name == 'A')
+ assert_series_equal(df['A'], Series(idx, name='A'))
+ assert_series_equal(df['B'], Series(dr, name='B'))
+
+ def test_from_index(self):
+
+ # from index
+ idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
+ df2 = DataFrame(idx2)
+ assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+ df2 = DataFrame(Series(idx2))
+ assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+
+ idx2 = date_range('20130101', periods=3, tz='US/Eastern')
+ df2 = DataFrame(idx2)
+ assert_series_equal(df2[0], Series(idx2, name=0))
+ df2 = DataFrame(Series(idx2))
+ assert_series_equal(df2[0], Series(idx2, name=0))
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 97ca8238b78f9..064230bde791a 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -9,6 +9,7 @@
from pandas import (DataFrame, Series, date_range, Timedelta, Timestamp,
compat, option_context)
from pandas.compat import u
+from pandas.core import common as com
from pandas.tests.frame.common import TestData
from pandas.util.testing import (assert_series_equal,
assert_frame_equal,
@@ -74,6 +75,21 @@ def test_empty_frame_dtypes_ftypes(self):
assert_series_equal(df[:0].dtypes, ex_dtypes)
assert_series_equal(df[:0].ftypes, ex_ftypes)
+ def test_datetime_with_tz_dtypes(self):
+ tzframe = DataFrame({'A': date_range('20130101', periods=3),
+ 'B': date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'C': date_range('20130101', periods=3, tz='CET')})
+ tzframe.iloc[1, 1] = pd.NaT
+ tzframe.iloc[1, 2] = pd.NaT
+ result = tzframe.dtypes.sort_index()
+ expected = Series([np.dtype('datetime64[ns]'),
+ com.DatetimeTZDtype('datetime64[ns, US/Eastern]'),
+ com.DatetimeTZDtype('datetime64[ns, CET]')],
+ ['A', 'B', 'C'])
+
+ assert_series_equal(result, expected)
+
def test_dtypes_are_correct_after_column_slice(self):
# GH6525
df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_)
@@ -178,6 +194,16 @@ def test_select_dtypes_bad_datetime64(self):
with tm.assertRaisesRegexp(ValueError, '.+ is too specific'):
df.select_dtypes(exclude=['datetime64[as]'])
+ def test_select_dtypes_datetime_with_tz(self):
+
+ df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='CET')),
+ index=range(5))
+ df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1)
+ result = df3.select_dtypes(include=['datetime64[ns]'])
+ expected = df3.reindex(columns=[])
+ assert_frame_equal(result, expected)
+
def test_select_dtypes_str_raises(self):
df = DataFrame({'a': list('abc'),
'g': list(u('abc')),
@@ -394,3 +420,93 @@ def test_timedeltas(self):
'int64': 1}).sort_values()
result = df.get_dtype_counts().sort_values()
assert_series_equal(result, expected)
+
+
+class TestDataFrameDatetimeWithTZ(tm.TestCase, TestData):
+
+ _multiprocess_can_split_ = True
+
+ def test_interleave(self):
+
+ # interleave with object
+ result = self.tzframe.assign(D='foo').values
+ expected = np.array([[Timestamp('2013-01-01 00:00:00'),
+ Timestamp('2013-01-02 00:00:00'),
+ Timestamp('2013-01-03 00:00:00')],
+ [Timestamp('2013-01-01 00:00:00-0500',
+ tz='US/Eastern'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00-0500',
+ tz='US/Eastern')],
+ [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00+0100', tz='CET')],
+ ['foo', 'foo', 'foo']], dtype=object).T
+ self.assert_numpy_array_equal(result, expected)
+
+ # interleave with only datetime64[ns]
+ result = self.tzframe.values
+ expected = np.array([[Timestamp('2013-01-01 00:00:00'),
+ Timestamp('2013-01-02 00:00:00'),
+ Timestamp('2013-01-03 00:00:00')],
+ [Timestamp('2013-01-01 00:00:00-0500',
+ tz='US/Eastern'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00-0500',
+ tz='US/Eastern')],
+ [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00+0100',
+ tz='CET')]], dtype=object).T
+ self.assert_numpy_array_equal(result, expected)
+
+ def test_astype(self):
+ # astype
+ expected = np.array([[Timestamp('2013-01-01 00:00:00'),
+ Timestamp('2013-01-02 00:00:00'),
+ Timestamp('2013-01-03 00:00:00')],
+ [Timestamp('2013-01-01 00:00:00-0500',
+ tz='US/Eastern'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00-0500',
+ tz='US/Eastern')],
+ [Timestamp('2013-01-01 00:00:00+0100', tz='CET'),
+ pd.NaT,
+ Timestamp('2013-01-03 00:00:00+0100',
+ tz='CET')]],
+ dtype=object).T
+ result = self.tzframe.astype(object)
+ assert_frame_equal(result, DataFrame(
+ expected, index=self.tzframe.index, columns=self.tzframe.columns))
+
+ result = self.tzframe.astype('datetime64[ns]')
+ expected = DataFrame({'A': date_range('20130101', periods=3),
+ 'B': (date_range('20130101', periods=3,
+ tz='US/Eastern')
+ .tz_convert('UTC')
+ .tz_localize(None)),
+ 'C': (date_range('20130101', periods=3,
+ tz='CET')
+ .tz_convert('UTC')
+ .tz_localize(None))})
+ expected.iloc[1, 1] = pd.NaT
+ expected.iloc[1, 2] = pd.NaT
+ assert_frame_equal(result, expected)
+
+ def test_astype_str(self):
+ # str formatting
+ result = self.tzframe.astype(str)
+ expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00',
+ '2013-01-01 00:00:00+01:00'],
+ ['2013-01-02', 'NaT', 'NaT'],
+ ['2013-01-03', '2013-01-03 00:00:00-05:00',
+ '2013-01-03 00:00:00+01:00']], dtype=object)
+ self.assert_numpy_array_equal(result, expected)
+
+ result = str(self.tzframe)
+ self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 '
+ '2013-01-01 00:00:00+01:00' in result)
+ self.assertTrue('1 2013-01-02 '
+ 'NaT NaT' in result)
+ self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 '
+ '2013-01-03 00:00:00+01:00' in result)
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index ca1ebe477e903..fc8456cb59840 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -2699,3 +2699,64 @@ def test_type_error_multiindex(self):
result = dg['x', 0]
assert_series_equal(result, expected)
+
+
+class TestDataFrameIndexingDatetimeWithTZ(tm.TestCase, TestData):
+
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
+ name='foo')
+ self.dr = date_range('20130110', periods=3)
+ self.df = DataFrame({'A': self.idx, 'B': self.dr})
+
+ def test_setitem(self):
+
+ df = self.df
+ idx = self.idx
+
+ # setitem
+ df['C'] = idx
+ assert_series_equal(df['C'], Series(idx, name='C'))
+
+ df['D'] = 'foo'
+ df['D'] = idx
+ assert_series_equal(df['D'], Series(idx, name='D'))
+ del df['D']
+
+ # assert that A & C are not sharing the same base (e.g. they
+ # are copies)
+ b1 = df._data.blocks[1]
+ b2 = df._data.blocks[2]
+ self.assertTrue(b1.values.equals(b2.values))
+ self.assertFalse(id(b1.values.values.base) ==
+ id(b2.values.values.base))
+
+ # with nan
+ df2 = df.copy()
+ df2.iloc[1, 1] = pd.NaT
+ df2.iloc[1, 2] = pd.NaT
+ result = df2['B']
+ assert_series_equal(notnull(result), Series(
+ [True, False, True], name='B'))
+ assert_series_equal(df2.dtypes, df.dtypes)
+
+ def test_set_reset(self):
+
+ idx = self.idx
+
+ # set/reset
+ df = DataFrame({'A': [0, 1, 2]}, index=idx)
+ result = df.reset_index()
+ self.assertTrue(result['foo'].dtype, 'M8[ns, US/Eastern')
+
+ result = result.set_index('foo')
+ tm.assert_index_equal(df.index, idx)
+
+ def test_transpose(self):
+
+ result = self.df.T
+ expected = DataFrame(self.df.values.T)
+ expected.index = ['A', 'B']
+ assert_frame_equal(result, expected)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 474ce0f899217..9430975d76475 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -1136,6 +1136,15 @@ def test_concat_NaT_series(self):
result = pd.concat([x, y], ignore_index=True)
tm.assert_series_equal(result, expected)
+ def test_concat_tz_frame(self):
+ df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+ B=Timestamp('20130603', tz='CET')),
+ index=range(5))
+
+ # concat
+ df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1)
+ assert_frame_equal(df2, df3)
+
def test_concat_tz_series(self):
# GH 11755
# tz and no tz
| https://api.github.com/repos/pandas-dev/pandas/pulls/13301 | 2016-05-26T22:24:50Z | 2016-05-26T23:44:04Z | null | 2016-05-26T23:44:04Z |
|
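The reorganized tests in the record above all revolve around tz-aware datetime columns; a minimal sketch of the dtypes involved (behavior of modern pandas is assumed here), matching what the new `test_datetime_with_tz_dtypes` checks:

```python
import pandas as pd

df = pd.DataFrame({'A': pd.date_range('20130101', periods=3),
                   'B': pd.date_range('20130101', periods=3,
                                      tz='US/Eastern')})

# Naive columns carry plain datetime64[ns]; tz-aware columns carry a
# DatetimeTZDtype that records the timezone in the dtype itself.
print(df.dtypes)
```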
TST: split up test_merge | diff --git a/ci/requirements-3.4.run b/ci/requirements-3.4.run
index 7d4cdcd21595a..3e12adae7dd9f 100644
--- a/ci/requirements-3.4.run
+++ b/ci/requirements-3.4.run
@@ -1,4 +1,4 @@
-pytz
+pytz=2015.7
numpy=1.8.1
openpyxl
xlsxwriter
diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py
new file mode 100644
index 0000000000000..62bd12130ca53
--- /dev/null
+++ b/pandas/tools/tests/test_concat.py
@@ -0,0 +1,1035 @@
+import nose
+
+import numpy as np
+from numpy.random import randn
+
+from datetime import datetime
+from pandas.compat import StringIO
+import pandas as pd
+from pandas import (DataFrame, concat,
+ read_csv, isnull, Series, date_range,
+ Index, Panel, MultiIndex, Timestamp,
+ DatetimeIndex)
+from pandas.util import testing as tm
+from pandas.util.testing import (assert_frame_equal,
+ makeCustomDataframe as mkdf,
+ assert_almost_equal)
+
+
+class TestConcatenate(tm.TestCase):
+
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.frame = DataFrame(tm.getSeriesData())
+ self.mixed_frame = self.frame.copy()
+ self.mixed_frame['foo'] = 'bar'
+
+ def test_append(self):
+ begin_index = self.frame.index[:5]
+ end_index = self.frame.index[5:]
+
+ begin_frame = self.frame.reindex(begin_index)
+ end_frame = self.frame.reindex(end_index)
+
+ appended = begin_frame.append(end_frame)
+ assert_almost_equal(appended['A'], self.frame['A'])
+
+ del end_frame['A']
+ partial_appended = begin_frame.append(end_frame)
+ self.assertIn('A', partial_appended)
+
+ partial_appended = end_frame.append(begin_frame)
+ self.assertIn('A', partial_appended)
+
+ # mixed type handling
+ appended = self.mixed_frame[:5].append(self.mixed_frame[5:])
+ assert_frame_equal(appended, self.mixed_frame)
+
+ # what to test here
+ mixed_appended = self.mixed_frame[:5].append(self.frame[5:])
+ mixed_appended2 = self.frame[:5].append(self.mixed_frame[5:])
+
+ # all equal except 'foo' column
+ assert_frame_equal(
+ mixed_appended.reindex(columns=['A', 'B', 'C', 'D']),
+ mixed_appended2.reindex(columns=['A', 'B', 'C', 'D']))
+
+ # append empty
+ empty = DataFrame({})
+
+ appended = self.frame.append(empty)
+ assert_frame_equal(self.frame, appended)
+ self.assertIsNot(appended, self.frame)
+
+ appended = empty.append(self.frame)
+ assert_frame_equal(self.frame, appended)
+ self.assertIsNot(appended, self.frame)
+
+ # overlap
+ self.assertRaises(ValueError, self.frame.append, self.frame,
+ verify_integrity=True)
+
+ # new columns
+ # GH 6129
+ df = DataFrame({'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}})
+ row = Series([5, 6, 7], index=['a', 'b', 'c'], name='z')
+ expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': {
+ 'x': 3, 'y': 4, 'z': 6}, 'c': {'z': 7}})
+ result = df.append(row)
+ assert_frame_equal(result, expected)
+
+ def test_append_length0_frame(self):
+ df = DataFrame(columns=['A', 'B', 'C'])
+ df3 = DataFrame(index=[0, 1], columns=['A', 'B'])
+ df5 = df.append(df3)
+
+ expected = DataFrame(index=[0, 1], columns=['A', 'B', 'C'])
+ assert_frame_equal(df5, expected)
+
+ def test_append_records(self):
+ arr1 = np.zeros((2,), dtype=('i4,f4,a10'))
+ arr1[:] = [(1, 2., 'Hello'), (2, 3., "World")]
+
+ arr2 = np.zeros((3,), dtype=('i4,f4,a10'))
+ arr2[:] = [(3, 4., 'foo'),
+ (5, 6., "bar"),
+ (7., 8., 'baz')]
+
+ df1 = DataFrame(arr1)
+ df2 = DataFrame(arr2)
+
+ result = df1.append(df2, ignore_index=True)
+ expected = DataFrame(np.concatenate((arr1, arr2)))
+ assert_frame_equal(result, expected)
+
+ def test_append_different_columns(self):
+ df = DataFrame({'bools': np.random.randn(10) > 0,
+ 'ints': np.random.randint(0, 10, 10),
+ 'floats': np.random.randn(10),
+ 'strings': ['foo', 'bar'] * 5})
+
+ a = df[:5].ix[:, ['bools', 'ints', 'floats']]
+ b = df[5:].ix[:, ['strings', 'ints', 'floats']]
+
+ appended = a.append(b)
+ self.assertTrue(isnull(appended['strings'][0:4]).all())
+ self.assertTrue(isnull(appended['bools'][5:]).all())
+
+ def test_append_many(self):
+ chunks = [self.frame[:5], self.frame[5:10],
+ self.frame[10:15], self.frame[15:]]
+
+ result = chunks[0].append(chunks[1:])
+ tm.assert_frame_equal(result, self.frame)
+
+ chunks[-1] = chunks[-1].copy()
+ chunks[-1]['foo'] = 'bar'
+ result = chunks[0].append(chunks[1:])
+ tm.assert_frame_equal(result.ix[:, self.frame.columns], self.frame)
+ self.assertTrue((result['foo'][15:] == 'bar').all())
+ self.assertTrue(result['foo'][:15].isnull().all())
+
+ def test_append_preserve_index_name(self):
+ # #980
+ df1 = DataFrame(data=None, columns=['A', 'B', 'C'])
+ df1 = df1.set_index(['A'])
+ df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]],
+ columns=['A', 'B', 'C'])
+ df2 = df2.set_index(['A'])
+
+ result = df1.append(df2)
+ self.assertEqual(result.index.name, 'A')
+
+ def test_join_many(self):
+ df = DataFrame(np.random.randn(10, 6), columns=list('abcdef'))
+ df_list = [df[['a', 'b']], df[['c', 'd']], df[['e', 'f']]]
+
+ joined = df_list[0].join(df_list[1:])
+ tm.assert_frame_equal(joined, df)
+
+ df_list = [df[['a', 'b']][:-2],
+ df[['c', 'd']][2:], df[['e', 'f']][1:9]]
+
+ def _check_diff_index(df_list, result, exp_index):
+ reindexed = [x.reindex(exp_index) for x in df_list]
+ expected = reindexed[0].join(reindexed[1:])
+ tm.assert_frame_equal(result, expected)
+
+ # different join types
+ joined = df_list[0].join(df_list[1:], how='outer')
+ _check_diff_index(df_list, joined, df.index)
+
+ joined = df_list[0].join(df_list[1:])
+ _check_diff_index(df_list, joined, df_list[0].index)
+
+ joined = df_list[0].join(df_list[1:], how='inner')
+ _check_diff_index(df_list, joined, df.index[2:8])
+
+ self.assertRaises(ValueError, df_list[0].join, df_list[1:], on='a')
+
+ def test_join_many_mixed(self):
+ df = DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
+ df['key'] = ['foo', 'bar'] * 4
+ df1 = df.ix[:, ['A', 'B']]
+ df2 = df.ix[:, ['C', 'D']]
+ df3 = df.ix[:, ['key']]
+
+ result = df1.join([df2, df3])
+ assert_frame_equal(result, df)
+
+ def test_append_missing_column_proper_upcast(self):
+ df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8')})
+ df2 = DataFrame({'B': np.array([True, False, True, False],
+ dtype=bool)})
+
+ appended = df1.append(df2, ignore_index=True)
+ self.assertEqual(appended['A'].dtype, 'f8')
+ self.assertEqual(appended['B'].dtype, 'O')
+
+ def test_concat_copy(self):
+
+ df = DataFrame(np.random.randn(4, 3))
+ df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1))
+ df3 = DataFrame({5: 'foo'}, index=range(4))
+
+ # these are actual copies
+ result = concat([df, df2, df3], axis=1, copy=True)
+ for b in result._data.blocks:
+ self.assertIsNone(b.values.base)
+
+ # these are the same
+ result = concat([df, df2, df3], axis=1, copy=False)
+ for b in result._data.blocks:
+ if b.is_float:
+ self.assertTrue(
+ b.values.base is df._data.blocks[0].values.base)
+ elif b.is_integer:
+ self.assertTrue(
+ b.values.base is df2._data.blocks[0].values.base)
+ elif b.is_object:
+ self.assertIsNotNone(b.values.base)
+
+ # float block was consolidated
+ df4 = DataFrame(np.random.randn(4, 1))
+ result = concat([df, df2, df3, df4], axis=1, copy=False)
+ for b in result._data.blocks:
+ if b.is_float:
+ self.assertIsNone(b.values.base)
+ elif b.is_integer:
+ self.assertTrue(
+ b.values.base is df2._data.blocks[0].values.base)
+ elif b.is_object:
+ self.assertIsNotNone(b.values.base)
+
+ def test_concat_with_group_keys(self):
+ df = DataFrame(np.random.randn(4, 3))
+ df2 = DataFrame(np.random.randn(4, 4))
+
+ # axis=0
+ df = DataFrame(np.random.randn(3, 4))
+ df2 = DataFrame(np.random.randn(4, 4))
+
+ result = concat([df, df2], keys=[0, 1])
+ exp_index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1, 1],
+ [0, 1, 2, 0, 1, 2, 3]])
+ expected = DataFrame(np.r_[df.values, df2.values],
+ index=exp_index)
+ tm.assert_frame_equal(result, expected)
+
+ result = concat([df, df], keys=[0, 1])
+ exp_index2 = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1],
+ [0, 1, 2, 0, 1, 2]])
+ expected = DataFrame(np.r_[df.values, df.values],
+ index=exp_index2)
+ tm.assert_frame_equal(result, expected)
+
+ # axis=1
+ df = DataFrame(np.random.randn(4, 3))
+ df2 = DataFrame(np.random.randn(4, 4))
+
+ result = concat([df, df2], keys=[0, 1], axis=1)
+ expected = DataFrame(np.c_[df.values, df2.values],
+ columns=exp_index)
+ tm.assert_frame_equal(result, expected)
+
+ result = concat([df, df], keys=[0, 1], axis=1)
+ expected = DataFrame(np.c_[df.values, df.values],
+ columns=exp_index2)
+ tm.assert_frame_equal(result, expected)
+
+ def test_concat_keys_specific_levels(self):
+ df = DataFrame(np.random.randn(10, 4))
+ pieces = [df.ix[:, [0, 1]], df.ix[:, [2]], df.ix[:, [3]]]
+ level = ['three', 'two', 'one', 'zero']
+ result = concat(pieces, axis=1, keys=['one', 'two', 'three'],
+ levels=[level],
+ names=['group_key'])
+
+ self.assert_numpy_array_equal(result.columns.levels[0], level)
+ self.assertEqual(result.columns.names[0], 'group_key')
+
+ def test_concat_dataframe_keys_bug(self):
+ t1 = DataFrame({
+ 'value': Series([1, 2, 3], index=Index(['a', 'b', 'c'],
+ name='id'))})
+ t2 = DataFrame({
+ 'value': Series([7, 8], index=Index(['a', 'b'], name='id'))})
+
+ # it works
+ result = concat([t1, t2], axis=1, keys=['t1', 't2'])
+ self.assertEqual(list(result.columns), [('t1', 'value'),
+ ('t2', 'value')])
+
+ def test_concat_series_partial_columns_names(self):
+ # GH10698
+ foo = Series([1, 2], name='foo')
+ bar = Series([1, 2])
+ baz = Series([4, 5])
+
+ result = concat([foo, bar, baz], axis=1)
+ expected = DataFrame({'foo': [1, 2], 0: [1, 2], 1: [
+ 4, 5]}, columns=['foo', 0, 1])
+ tm.assert_frame_equal(result, expected)
+
+ result = concat([foo, bar, baz], axis=1, keys=[
+ 'red', 'blue', 'yellow'])
+ expected = DataFrame({'red': [1, 2], 'blue': [1, 2], 'yellow': [
+ 4, 5]}, columns=['red', 'blue', 'yellow'])
+ tm.assert_frame_equal(result, expected)
+
+ result = concat([foo, bar, baz], axis=1, ignore_index=True)
+ expected = DataFrame({0: [1, 2], 1: [1, 2], 2: [4, 5]})
+ tm.assert_frame_equal(result, expected)
+
+ def test_concat_dict(self):
+ frames = {'foo': DataFrame(np.random.randn(4, 3)),
+ 'bar': DataFrame(np.random.randn(4, 3)),
+ 'baz': DataFrame(np.random.randn(4, 3)),
+ 'qux': DataFrame(np.random.randn(4, 3))}
+
+ sorted_keys = sorted(frames)
+
+ result = concat(frames)
+ expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys)
+ tm.assert_frame_equal(result, expected)
+
+ result = concat(frames, axis=1)
+ expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys,
+ axis=1)
+ tm.assert_frame_equal(result, expected)
+
+ keys = ['baz', 'foo', 'bar']
+ result = concat(frames, keys=keys)
+ expected = concat([frames[k] for k in keys], keys=keys)
+ tm.assert_frame_equal(result, expected)
+
+ def test_concat_ignore_index(self):
+ frame1 = DataFrame({"test1": ["a", "b", "c"],
+ "test2": [1, 2, 3],
+ "test3": [4.5, 3.2, 1.2]})
+ frame2 = DataFrame({"test3": [5.2, 2.2, 4.3]})
+ frame1.index = Index(["x", "y", "z"])
+ frame2.index = Index(["x", "y", "q"])
+
+ v1 = concat([frame1, frame2], axis=1, ignore_index=True)
+
+ nan = np.nan
+ expected = DataFrame([[nan, nan, nan, 4.3],
+ ['a', 1, 4.5, 5.2],
+ ['b', 2, 3.2, 2.2],
+ ['c', 3, 1.2, nan]],
+ index=Index(["q", "x", "y", "z"]))
+
+ tm.assert_frame_equal(v1, expected)
+
+ def test_concat_multiindex_with_keys(self):
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['first', 'second'])
+ frame = DataFrame(np.random.randn(10, 3), index=index,
+ columns=Index(['A', 'B', 'C'], name='exp'))
+ result = concat([frame, frame], keys=[0, 1], names=['iteration'])
+
+ self.assertEqual(result.index.names, ('iteration',) + index.names)
+ tm.assert_frame_equal(result.ix[0], frame)
+ tm.assert_frame_equal(result.ix[1], frame)
+ self.assertEqual(result.index.nlevels, 3)
+
+ def test_concat_multiindex_with_tz(self):
+ # GH 6606
+ df = DataFrame({'dt': [datetime(2014, 1, 1),
+ datetime(2014, 1, 2),
+ datetime(2014, 1, 3)],
+ 'b': ['A', 'B', 'C'],
+ 'c': [1, 2, 3], 'd': [4, 5, 6]})
+ df['dt'] = df['dt'].apply(lambda d: Timestamp(d, tz='US/Pacific'))
+ df = df.set_index(['dt', 'b'])
+
+ exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02',
+ '2014-01-03'] * 2,
+ tz='US/Pacific', name='dt')
+ exp_idx2 = Index(['A', 'B', 'C'] * 2, name='b')
+ exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2])
+ expected = DataFrame({'c': [1, 2, 3] * 2, 'd': [4, 5, 6] * 2},
+ index=exp_idx, columns=['c', 'd'])
+
+ result = concat([df, df])
+ tm.assert_frame_equal(result, expected)
+
+ def test_concat_keys_and_levels(self):
+ df = DataFrame(np.random.randn(1, 3))
+ df2 = DataFrame(np.random.randn(1, 4))
+
+ levels = [['foo', 'baz'], ['one', 'two']]
+ names = ['first', 'second']
+ result = concat([df, df2, df, df2],
+ keys=[('foo', 'one'), ('foo', 'two'),
+ ('baz', 'one'), ('baz', 'two')],
+ levels=levels,
+ names=names)
+ expected = concat([df, df2, df, df2])
+ exp_index = MultiIndex(levels=levels + [[0]],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1],
+ [0, 0, 0, 0]],
+ names=names + [None])
+ expected.index = exp_index
+
+ assert_frame_equal(result, expected)
+
+ # no names
+
+ result = concat([df, df2, df, df2],
+ keys=[('foo', 'one'), ('foo', 'two'),
+ ('baz', 'one'), ('baz', 'two')],
+ levels=levels)
+ self.assertEqual(result.index.names, (None,) * 3)
+
+ # no levels
+ result = concat([df, df2, df, df2],
+ keys=[('foo', 'one'), ('foo', 'two'),
+ ('baz', 'one'), ('baz', 'two')],
+ names=['first', 'second'])
+ self.assertEqual(result.index.names, ('first', 'second') + (None,))
+ self.assert_numpy_array_equal(result.index.levels[0], ['baz', 'foo'])
+
+ def test_concat_keys_levels_no_overlap(self):
+ # GH #1406
+ df = DataFrame(np.random.randn(1, 3), index=['a'])
+ df2 = DataFrame(np.random.randn(1, 4), index=['b'])
+
+ self.assertRaises(ValueError, concat, [df, df],
+ keys=['one', 'two'], levels=[['foo', 'bar', 'baz']])
+
+ self.assertRaises(ValueError, concat, [df, df2],
+ keys=['one', 'two'], levels=[['foo', 'bar', 'baz']])
+
+ def test_concat_rename_index(self):
+ a = DataFrame(np.random.rand(3, 3),
+ columns=list('ABC'),
+ index=Index(list('abc'), name='index_a'))
+ b = DataFrame(np.random.rand(3, 3),
+ columns=list('ABC'),
+ index=Index(list('abc'), name='index_b'))
+
+ result = concat([a, b], keys=['key0', 'key1'],
+ names=['lvl0', 'lvl1'])
+
+ exp = concat([a, b], keys=['key0', 'key1'], names=['lvl0'])
+ names = list(exp.index.names)
+ names[1] = 'lvl1'
+ exp.index.set_names(names, inplace=True)
+
+ tm.assert_frame_equal(result, exp)
+ self.assertEqual(result.index.names, exp.index.names)
+
+ def test_crossed_dtypes_weird_corner(self):
+ columns = ['A', 'B', 'C', 'D']
+ df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'),
+ 'B': np.array([1, 2, 3, 4], dtype='i8'),
+ 'C': np.array([1, 2, 3, 4], dtype='f8'),
+ 'D': np.array([1, 2, 3, 4], dtype='i8')},
+ columns=columns)
+
+ df2 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8'),
+ 'B': np.array([1, 2, 3, 4], dtype='f8'),
+ 'C': np.array([1, 2, 3, 4], dtype='i8'),
+ 'D': np.array([1, 2, 3, 4], dtype='f8')},
+ columns=columns)
+
+ appended = df1.append(df2, ignore_index=True)
+ expected = DataFrame(np.concatenate([df1.values, df2.values], axis=0),
+ columns=columns)
+ tm.assert_frame_equal(appended, expected)
+
+ df = DataFrame(np.random.randn(1, 3), index=['a'])
+ df2 = DataFrame(np.random.randn(1, 4), index=['b'])
+ result = concat(
+ [df, df2], keys=['one', 'two'], names=['first', 'second'])
+ self.assertEqual(result.index.names, ('first', 'second'))
+
+ def test_dups_index(self):
+ # GH 4771
+
+ # single dtypes
+ df = DataFrame(np.random.randint(0, 10, size=40).reshape(
+ 10, 4), columns=['A', 'A', 'C', 'C'])
+
+ result = concat([df, df], axis=1)
+ assert_frame_equal(result.iloc[:, :4], df)
+ assert_frame_equal(result.iloc[:, 4:], df)
+
+ result = concat([df, df], axis=0)
+ assert_frame_equal(result.iloc[:10], df)
+ assert_frame_equal(result.iloc[10:], df)
+
+ # multi dtypes
+ df = concat([DataFrame(np.random.randn(10, 4),
+ columns=['A', 'A', 'B', 'B']),
+ DataFrame(np.random.randint(0, 10, size=20)
+ .reshape(10, 2),
+ columns=['A', 'C'])],
+ axis=1)
+
+ result = concat([df, df], axis=1)
+ assert_frame_equal(result.iloc[:, :6], df)
+ assert_frame_equal(result.iloc[:, 6:], df)
+
+ result = concat([df, df], axis=0)
+ assert_frame_equal(result.iloc[:10], df)
+ assert_frame_equal(result.iloc[10:], df)
+
+ # append
+ result = df.iloc[0:8, :].append(df.iloc[8:])
+ assert_frame_equal(result, df)
+
+ result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10])
+ assert_frame_equal(result, df)
+
+ expected = concat([df, df], axis=0)
+ result = df.append(df)
+ assert_frame_equal(result, expected)
+
+ def test_with_mixed_tuples(self):
+ # 10697
+ # columns have mixed tuples, so handle properly
+ df1 = DataFrame({u'A': 'foo', (u'B', 1): 'bar'}, index=range(2))
+ df2 = DataFrame({u'B': 'foo', (u'B', 1): 'bar'}, index=range(2))
+
+ # it works
+ concat([df1, df2])
+
+ def test_join_dups(self):
+
+ # joining dups
+ df = concat([DataFrame(np.random.randn(10, 4),
+ columns=['A', 'A', 'B', 'B']),
+ DataFrame(np.random.randint(0, 10, size=20)
+ .reshape(10, 2),
+ columns=['A', 'C'])],
+ axis=1)
+
+ expected = concat([df, df], axis=1)
+ result = df.join(df, rsuffix='_2')
+ result.columns = expected.columns
+ assert_frame_equal(result, expected)
+
+ # GH 4975, invalid join on dups
+ w = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
+ x = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
+ y = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
+ z = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
+
+ dta = x.merge(y, left_index=True, right_index=True).merge(
+ z, left_index=True, right_index=True, how="outer")
+ dta = dta.merge(w, left_index=True, right_index=True)
+ expected = concat([x, y, z, w], axis=1)
+ expected.columns = ['x_x', 'y_x', 'x_y',
+ 'y_y', 'x_x', 'y_x', 'x_y', 'y_y']
+ assert_frame_equal(dta, expected)
+
+ def test_handle_empty_objects(self):
+ df = DataFrame(np.random.randn(10, 4), columns=list('abcd'))
+
+ baz = df[:5].copy()
+ baz['foo'] = 'bar'
+ empty = df[5:5]
+
+ frames = [baz, empty, empty, df[5:]]
+ concatted = concat(frames, axis=0)
+
+ expected = df.ix[:, ['a', 'b', 'c', 'd', 'foo']]
+ expected['foo'] = expected['foo'].astype('O')
+ expected.loc[0:4, 'foo'] = 'bar'
+
+ tm.assert_frame_equal(concatted, expected)
+
+ # empty as first element with time series
+ # GH3259
+ df = DataFrame(dict(A=range(10000)), index=date_range(
+ '20130101', periods=10000, freq='s'))
+ empty = DataFrame()
+ result = concat([df, empty], axis=1)
+ assert_frame_equal(result, df)
+ result = concat([empty, df], axis=1)
+ assert_frame_equal(result, df)
+
+ result = concat([df, empty])
+ assert_frame_equal(result, df)
+ result = concat([empty, df])
+ assert_frame_equal(result, df)
+
+ def test_concat_mixed_objs(self):
+
+ # concat mixed series/frames
+        # GH 2385
+
+ # axis 1
+ index = date_range('01-Jan-2013', periods=10, freq='H')
+ arr = np.arange(10, dtype='int64')
+ s1 = Series(arr, index=index)
+ s2 = Series(arr, index=index)
+ df = DataFrame(arr.reshape(-1, 1), index=index)
+
+ expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2),
+ index=index, columns=[0, 0])
+ result = concat([df, df], axis=1)
+ assert_frame_equal(result, expected)
+
+ expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2),
+ index=index, columns=[0, 1])
+ result = concat([s1, s2], axis=1)
+ assert_frame_equal(result, expected)
+
+ expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
+ index=index, columns=[0, 1, 2])
+ result = concat([s1, s2, s1], axis=1)
+ assert_frame_equal(result, expected)
+
+ expected = DataFrame(np.repeat(arr, 5).reshape(-1, 5),
+ index=index, columns=[0, 0, 1, 2, 3])
+ result = concat([s1, df, s2, s2, s1], axis=1)
+ assert_frame_equal(result, expected)
+
+ # with names
+ s1.name = 'foo'
+ expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
+ index=index, columns=['foo', 0, 0])
+ result = concat([s1, df, s2], axis=1)
+ assert_frame_equal(result, expected)
+
+ s2.name = 'bar'
+ expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
+ index=index, columns=['foo', 0, 'bar'])
+ result = concat([s1, df, s2], axis=1)
+ assert_frame_equal(result, expected)
+
+ # ignore index
+ expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
+ index=index, columns=[0, 1, 2])
+ result = concat([s1, df, s2], axis=1, ignore_index=True)
+ assert_frame_equal(result, expected)
+
+ # axis 0
+ expected = DataFrame(np.tile(arr, 3).reshape(-1, 1),
+ index=index.tolist() * 3, columns=[0])
+ result = concat([s1, df, s2])
+ assert_frame_equal(result, expected)
+
+ expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), columns=[0])
+ result = concat([s1, df, s2], ignore_index=True)
+ assert_frame_equal(result, expected)
+
+        # invalid concatenate of mixed dims
+ panel = tm.makePanel()
+ self.assertRaises(ValueError, lambda: concat([panel, s1], axis=1))
+
+ def test_panel_join(self):
+ panel = tm.makePanel()
+ tm.add_nans(panel)
+
+ p1 = panel.ix[:2, :10, :3]
+ p2 = panel.ix[2:, 5:, 2:]
+
+ # left join
+ result = p1.join(p2)
+ expected = p1.copy()
+ expected['ItemC'] = p2['ItemC']
+ tm.assert_panel_equal(result, expected)
+
+ # right join
+ result = p1.join(p2, how='right')
+ expected = p2.copy()
+ expected['ItemA'] = p1['ItemA']
+ expected['ItemB'] = p1['ItemB']
+ expected = expected.reindex(items=['ItemA', 'ItemB', 'ItemC'])
+ tm.assert_panel_equal(result, expected)
+
+ # inner join
+ result = p1.join(p2, how='inner')
+ expected = panel.ix[:, 5:10, 2:3]
+ tm.assert_panel_equal(result, expected)
+
+ # outer join
+ result = p1.join(p2, how='outer')
+ expected = p1.reindex(major=panel.major_axis,
+ minor=panel.minor_axis)
+ expected = expected.join(p2.reindex(major=panel.major_axis,
+ minor=panel.minor_axis))
+ tm.assert_panel_equal(result, expected)
+
+ def test_panel_join_overlap(self):
+ panel = tm.makePanel()
+ tm.add_nans(panel)
+
+ p1 = panel.ix[['ItemA', 'ItemB', 'ItemC']]
+ p2 = panel.ix[['ItemB', 'ItemC']]
+
+ # Expected index is
+ #
+ # ItemA, ItemB_p1, ItemC_p1, ItemB_p2, ItemC_p2
+ joined = p1.join(p2, lsuffix='_p1', rsuffix='_p2')
+ p1_suf = p1.ix[['ItemB', 'ItemC']].add_suffix('_p1')
+ p2_suf = p2.ix[['ItemB', 'ItemC']].add_suffix('_p2')
+ no_overlap = panel.ix[['ItemA']]
+ expected = no_overlap.join(p1_suf.join(p2_suf))
+ tm.assert_panel_equal(joined, expected)
+
+ def test_panel_join_many(self):
+ tm.K = 10
+ panel = tm.makePanel()
+ tm.K = 4
+
+ panels = [panel.ix[:2], panel.ix[2:6], panel.ix[6:]]
+
+ joined = panels[0].join(panels[1:])
+ tm.assert_panel_equal(joined, panel)
+
+ panels = [panel.ix[:2, :-5], panel.ix[2:6, 2:], panel.ix[6:, 5:-7]]
+
+ data_dict = {}
+ for p in panels:
+ data_dict.update(p.iteritems())
+
+ joined = panels[0].join(panels[1:], how='inner')
+ expected = Panel.from_dict(data_dict, intersect=True)
+ tm.assert_panel_equal(joined, expected)
+
+ joined = panels[0].join(panels[1:], how='outer')
+ expected = Panel.from_dict(data_dict, intersect=False)
+ tm.assert_panel_equal(joined, expected)
+
+ # edge cases
+ self.assertRaises(ValueError, panels[0].join, panels[1:],
+ how='outer', lsuffix='foo', rsuffix='bar')
+ self.assertRaises(ValueError, panels[0].join, panels[1:],
+ how='right')
+
+ def test_panel_concat_other_axes(self):
+ panel = tm.makePanel()
+
+ p1 = panel.ix[:, :5, :]
+ p2 = panel.ix[:, 5:, :]
+
+ result = concat([p1, p2], axis=1)
+ tm.assert_panel_equal(result, panel)
+
+ p1 = panel.ix[:, :, :2]
+ p2 = panel.ix[:, :, 2:]
+
+ result = concat([p1, p2], axis=2)
+ tm.assert_panel_equal(result, panel)
+
+ # if things are a bit misbehaved
+ p1 = panel.ix[:2, :, :2]
+ p2 = panel.ix[:, :, 2:]
+ p1['ItemC'] = 'baz'
+
+ result = concat([p1, p2], axis=2)
+
+ expected = panel.copy()
+ expected['ItemC'] = expected['ItemC'].astype('O')
+ expected.ix['ItemC', :, :2] = 'baz'
+ tm.assert_panel_equal(result, expected)
+
+ def test_panel_concat_buglet(self):
+ # #2257
+ def make_panel():
+ index = 5
+ cols = 3
+
+ def df():
+ return DataFrame(np.random.randn(index, cols),
+ index=["I%s" % i for i in range(index)],
+ columns=["C%s" % i for i in range(cols)])
+ return Panel(dict([("Item%s" % x, df()) for x in ['A', 'B', 'C']]))
+
+ panel1 = make_panel()
+ panel2 = make_panel()
+
+ panel2 = panel2.rename_axis(dict([(x, "%s_1" % x)
+ for x in panel2.major_axis]),
+ axis=1)
+
+ panel3 = panel2.rename_axis(lambda x: '%s_1' % x, axis=1)
+ panel3 = panel3.rename_axis(lambda x: '%s_1' % x, axis=2)
+
+ # it works!
+ concat([panel1, panel3], axis=1, verify_integrity=True)
+
+ def test_panel4d_concat(self):
+ p4d = tm.makePanel4D()
+
+ p1 = p4d.ix[:, :, :5, :]
+ p2 = p4d.ix[:, :, 5:, :]
+
+ result = concat([p1, p2], axis=2)
+ tm.assert_panel4d_equal(result, p4d)
+
+ p1 = p4d.ix[:, :, :, :2]
+ p2 = p4d.ix[:, :, :, 2:]
+
+ result = concat([p1, p2], axis=3)
+ tm.assert_panel4d_equal(result, p4d)
+
+ def test_panel4d_concat_mixed_type(self):
+ p4d = tm.makePanel4D()
+
+ # if things are a bit misbehaved
+ p1 = p4d.ix[:, :2, :, :2]
+ p2 = p4d.ix[:, :, :, 2:]
+ p1['L5'] = 'baz'
+
+ result = concat([p1, p2], axis=3)
+
+ p2['L5'] = np.nan
+ expected = concat([p1, p2], axis=3)
+ expected = expected.ix[result.labels]
+
+ tm.assert_panel4d_equal(result, expected)
+
+ def test_concat_series(self):
+
+ ts = tm.makeTimeSeries()
+ ts.name = 'foo'
+
+ pieces = [ts[:5], ts[5:15], ts[15:]]
+
+ result = concat(pieces)
+ tm.assert_series_equal(result, ts)
+ self.assertEqual(result.name, ts.name)
+
+ result = concat(pieces, keys=[0, 1, 2])
+ expected = ts.copy()
+
+ ts.index = DatetimeIndex(np.array(ts.index.values, dtype='M8[ns]'))
+
+ exp_labels = [np.repeat([0, 1, 2], [len(x) for x in pieces]),
+ np.arange(len(ts))]
+ exp_index = MultiIndex(levels=[[0, 1, 2], ts.index],
+ labels=exp_labels)
+ expected.index = exp_index
+ tm.assert_series_equal(result, expected)
+
+ def test_concat_series_axis1(self):
+ ts = tm.makeTimeSeries()
+
+ pieces = [ts[:-2], ts[2:], ts[2:-2]]
+
+ result = concat(pieces, axis=1)
+ expected = DataFrame(pieces).T
+ assert_frame_equal(result, expected)
+
+ result = concat(pieces, keys=['A', 'B', 'C'], axis=1)
+ expected = DataFrame(pieces, index=['A', 'B', 'C']).T
+ assert_frame_equal(result, expected)
+
+ # preserve series names, #2489
+ s = Series(randn(5), name='A')
+ s2 = Series(randn(5), name='B')
+
+ result = concat([s, s2], axis=1)
+ expected = DataFrame({'A': s, 'B': s2})
+ assert_frame_equal(result, expected)
+
+ s2.name = None
+ result = concat([s, s2], axis=1)
+ self.assertTrue(np.array_equal(
+ result.columns, Index(['A', 0], dtype='object')))
+
+ # must reindex, #2603
+ s = Series(randn(3), index=['c', 'a', 'b'], name='A')
+ s2 = Series(randn(4), index=['d', 'a', 'b', 'c'], name='B')
+ result = concat([s, s2], axis=1)
+ expected = DataFrame({'A': s, 'B': s2})
+ assert_frame_equal(result, expected)
+
+ def test_concat_single_with_key(self):
+ df = DataFrame(np.random.randn(10, 4))
+
+ result = concat([df], keys=['foo'])
+ expected = concat([df, df], keys=['foo', 'bar'])
+ tm.assert_frame_equal(result, expected[:10])
+
+ def test_concat_exclude_none(self):
+ df = DataFrame(np.random.randn(10, 4))
+
+ pieces = [df[:5], None, None, df[5:]]
+ result = concat(pieces)
+ tm.assert_frame_equal(result, df)
+ self.assertRaises(ValueError, concat, [None, None])
+
+ def test_concat_datetime64_block(self):
+ from pandas.tseries.index import date_range
+
+ rng = date_range('1/1/2000', periods=10)
+
+ df = DataFrame({'time': rng})
+
+ result = concat([df, df])
+ self.assertTrue((result.iloc[:10]['time'] == rng).all())
+ self.assertTrue((result.iloc[10:]['time'] == rng).all())
+
+ def test_concat_timedelta64_block(self):
+ from pandas import to_timedelta
+
+ rng = to_timedelta(np.arange(10), unit='s')
+
+ df = DataFrame({'time': rng})
+
+ result = concat([df, df])
+ self.assertTrue((result.iloc[:10]['time'] == rng).all())
+ self.assertTrue((result.iloc[10:]['time'] == rng).all())
+
+ def test_concat_keys_with_none(self):
+ # #1649
+ df0 = DataFrame([[10, 20, 30], [10, 20, 30], [10, 20, 30]])
+
+ result = concat(dict(a=None, b=df0, c=df0[:2], d=df0[:1], e=df0))
+ expected = concat(dict(b=df0, c=df0[:2], d=df0[:1], e=df0))
+ tm.assert_frame_equal(result, expected)
+
+ result = concat([None, df0, df0[:2], df0[:1], df0],
+ keys=['a', 'b', 'c', 'd', 'e'])
+ expected = concat([df0, df0[:2], df0[:1], df0],
+ keys=['b', 'c', 'd', 'e'])
+ tm.assert_frame_equal(result, expected)
+
+ def test_concat_bug_1719(self):
+ ts1 = tm.makeTimeSeries()
+ ts2 = tm.makeTimeSeries()[::2]
+
+ # to join with union
+ # these two are of different length!
+ left = concat([ts1, ts2], join='outer', axis=1)
+ right = concat([ts2, ts1], join='outer', axis=1)
+
+ self.assertEqual(len(left), len(right))
+
+ def test_concat_bug_2972(self):
+ ts0 = Series(np.zeros(5))
+ ts1 = Series(np.ones(5))
+ ts0.name = ts1.name = 'same name'
+ result = concat([ts0, ts1], axis=1)
+
+ expected = DataFrame({0: ts0, 1: ts1})
+ expected.columns = ['same name', 'same name']
+ assert_frame_equal(result, expected)
+
+ def test_concat_bug_3602(self):
+
+ # GH 3602, duplicate columns
+ df1 = DataFrame({'firmNo': [0, 0, 0, 0], 'stringvar': [
+ 'rrr', 'rrr', 'rrr', 'rrr'], 'prc': [6, 6, 6, 6]})
+ df2 = DataFrame({'misc': [1, 2, 3, 4], 'prc': [
+ 6, 6, 6, 6], 'C': [9, 10, 11, 12]})
+ expected = DataFrame([[0, 6, 'rrr', 9, 1, 6],
+ [0, 6, 'rrr', 10, 2, 6],
+ [0, 6, 'rrr', 11, 3, 6],
+ [0, 6, 'rrr', 12, 4, 6]])
+ expected.columns = ['firmNo', 'prc', 'stringvar', 'C', 'misc', 'prc']
+
+ result = concat([df1, df2], axis=1)
+ assert_frame_equal(result, expected)
+
+ def test_concat_series_axis1_same_names_ignore_index(self):
+ dates = date_range('01-Jan-2013', '01-Jan-2014', freq='MS')[0:-1]
+ s1 = Series(randn(len(dates)), index=dates, name='value')
+ s2 = Series(randn(len(dates)), index=dates, name='value')
+
+ result = concat([s1, s2], axis=1, ignore_index=True)
+ self.assertTrue(np.array_equal(result.columns, [0, 1]))
+
+ def test_concat_iterables(self):
+ from collections import deque, Iterable
+
+ # GH8645 check concat works with tuples, list, generators, and weird
+ # stuff like deque and custom iterables
+ df1 = DataFrame([1, 2, 3])
+ df2 = DataFrame([4, 5, 6])
+ expected = DataFrame([1, 2, 3, 4, 5, 6])
+ assert_frame_equal(concat((df1, df2), ignore_index=True), expected)
+ assert_frame_equal(concat([df1, df2], ignore_index=True), expected)
+ assert_frame_equal(concat((df for df in (df1, df2)),
+ ignore_index=True), expected)
+ assert_frame_equal(
+ concat(deque((df1, df2)), ignore_index=True), expected)
+
+ class CustomIterator1(object):
+
+ def __len__(self):
+ return 2
+
+ def __getitem__(self, index):
+ try:
+ return {0: df1, 1: df2}[index]
+ except KeyError:
+ raise IndexError
+ assert_frame_equal(pd.concat(CustomIterator1(),
+ ignore_index=True), expected)
+
+ class CustomIterator2(Iterable):
+
+ def __iter__(self):
+ yield df1
+ yield df2
+ assert_frame_equal(pd.concat(CustomIterator2(),
+ ignore_index=True), expected)
+
+ def test_concat_invalid(self):
+
+ # trying to concat a ndframe with a non-ndframe
+ df1 = mkdf(10, 2)
+ for obj in [1, dict(), [1, 2], (1, 2)]:
+            self.assertRaises(TypeError, lambda: concat([df1, obj]))
+
+ def test_concat_invalid_first_argument(self):
+ df1 = mkdf(10, 2)
+ df2 = mkdf(10, 2)
+ self.assertRaises(TypeError, concat, df1, df2)
+
+ # generator ok though
+ concat(DataFrame(np.random.rand(5, 5)) for _ in range(3))
+
+ # text reader ok
+ # GH6583
+ data = """index,A,B,C,D
+foo,2,3,4,5
+bar,7,8,9,10
+baz,12,13,14,15
+qux,12,13,14,15
+foo2,12,13,14,15
+bar2,12,13,14,15
+"""
+
+ reader = read_csv(StringIO(data), chunksize=1)
+ result = concat(reader, ignore_index=True)
+ expected = read_csv(StringIO(data))
+ assert_frame_equal(result, expected)
+
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 474ce0f899217..2f3a8f77af09b 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -9,17 +9,14 @@
import random
import pandas as pd
-from pandas.compat import range, lrange, lzip, StringIO
-from pandas import compat
-from pandas.tseries.index import DatetimeIndex
-from pandas.tools.merge import merge, concat, ordered_merge, MergeError
-from pandas import Categorical, Timestamp
-from pandas.util.testing import (assert_frame_equal, assert_series_equal,
- assert_almost_equal,
- makeCustomDataframe as mkdf,
- assertRaisesRegexp, slow)
-from pandas import (isnull, DataFrame, Index, MultiIndex, Panel,
- Series, date_range, read_csv)
+from pandas.compat import range, lrange, lzip
+from pandas.tools.merge import merge, concat, MergeError
+from pandas.util.testing import (assert_frame_equal,
+ assert_series_equal,
+ slow)
+from pandas import (DataFrame, Index, MultiIndex,
+ Series, date_range, Categorical,
+ compat)
import pandas.algos as algos
import pandas.util.testing as tm
@@ -2159,1100 +2156,6 @@ def _join_by_hand(a, b, how='left'):
return a_re.reindex(columns=result_columns)
-class TestConcatenate(tm.TestCase):
-
- _multiprocess_can_split_ = True
-
- def setUp(self):
- self.frame = DataFrame(tm.getSeriesData())
- self.mixed_frame = self.frame.copy()
- self.mixed_frame['foo'] = 'bar'
-
- def test_append(self):
- begin_index = self.frame.index[:5]
- end_index = self.frame.index[5:]
-
- begin_frame = self.frame.reindex(begin_index)
- end_frame = self.frame.reindex(end_index)
-
- appended = begin_frame.append(end_frame)
- assert_almost_equal(appended['A'], self.frame['A'])
-
- del end_frame['A']
- partial_appended = begin_frame.append(end_frame)
- self.assertIn('A', partial_appended)
-
- partial_appended = end_frame.append(begin_frame)
- self.assertIn('A', partial_appended)
-
- # mixed type handling
- appended = self.mixed_frame[:5].append(self.mixed_frame[5:])
- assert_frame_equal(appended, self.mixed_frame)
-
- # what to test here
- mixed_appended = self.mixed_frame[:5].append(self.frame[5:])
- mixed_appended2 = self.frame[:5].append(self.mixed_frame[5:])
-
- # all equal except 'foo' column
- assert_frame_equal(
- mixed_appended.reindex(columns=['A', 'B', 'C', 'D']),
- mixed_appended2.reindex(columns=['A', 'B', 'C', 'D']))
-
- # append empty
- empty = DataFrame({})
-
- appended = self.frame.append(empty)
- assert_frame_equal(self.frame, appended)
- self.assertIsNot(appended, self.frame)
-
- appended = empty.append(self.frame)
- assert_frame_equal(self.frame, appended)
- self.assertIsNot(appended, self.frame)
-
- # overlap
- self.assertRaises(ValueError, self.frame.append, self.frame,
- verify_integrity=True)
-
- # new columns
- # GH 6129
- df = DataFrame({'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}})
- row = Series([5, 6, 7], index=['a', 'b', 'c'], name='z')
- expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': {
- 'x': 3, 'y': 4, 'z': 6}, 'c': {'z': 7}})
- result = df.append(row)
- assert_frame_equal(result, expected)
-
- def test_append_length0_frame(self):
- df = DataFrame(columns=['A', 'B', 'C'])
- df3 = DataFrame(index=[0, 1], columns=['A', 'B'])
- df5 = df.append(df3)
-
- expected = DataFrame(index=[0, 1], columns=['A', 'B', 'C'])
- assert_frame_equal(df5, expected)
-
- def test_append_records(self):
- arr1 = np.zeros((2,), dtype=('i4,f4,a10'))
- arr1[:] = [(1, 2., 'Hello'), (2, 3., "World")]
-
- arr2 = np.zeros((3,), dtype=('i4,f4,a10'))
- arr2[:] = [(3, 4., 'foo'),
- (5, 6., "bar"),
- (7., 8., 'baz')]
-
- df1 = DataFrame(arr1)
- df2 = DataFrame(arr2)
-
- result = df1.append(df2, ignore_index=True)
- expected = DataFrame(np.concatenate((arr1, arr2)))
- assert_frame_equal(result, expected)
-
- def test_append_different_columns(self):
- df = DataFrame({'bools': np.random.randn(10) > 0,
- 'ints': np.random.randint(0, 10, 10),
- 'floats': np.random.randn(10),
- 'strings': ['foo', 'bar'] * 5})
-
- a = df[:5].ix[:, ['bools', 'ints', 'floats']]
- b = df[5:].ix[:, ['strings', 'ints', 'floats']]
-
- appended = a.append(b)
- self.assertTrue(isnull(appended['strings'][0:4]).all())
- self.assertTrue(isnull(appended['bools'][5:]).all())
-
- def test_append_many(self):
- chunks = [self.frame[:5], self.frame[5:10],
- self.frame[10:15], self.frame[15:]]
-
- result = chunks[0].append(chunks[1:])
- tm.assert_frame_equal(result, self.frame)
-
- chunks[-1] = chunks[-1].copy()
- chunks[-1]['foo'] = 'bar'
- result = chunks[0].append(chunks[1:])
- tm.assert_frame_equal(result.ix[:, self.frame.columns], self.frame)
- self.assertTrue((result['foo'][15:] == 'bar').all())
- self.assertTrue(result['foo'][:15].isnull().all())
-
- def test_append_preserve_index_name(self):
- # #980
- df1 = DataFrame(data=None, columns=['A', 'B', 'C'])
- df1 = df1.set_index(['A'])
- df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]],
- columns=['A', 'B', 'C'])
- df2 = df2.set_index(['A'])
-
- result = df1.append(df2)
- self.assertEqual(result.index.name, 'A')
-
- def test_join_many(self):
- df = DataFrame(np.random.randn(10, 6), columns=list('abcdef'))
- df_list = [df[['a', 'b']], df[['c', 'd']], df[['e', 'f']]]
-
- joined = df_list[0].join(df_list[1:])
- tm.assert_frame_equal(joined, df)
-
- df_list = [df[['a', 'b']][:-2],
- df[['c', 'd']][2:], df[['e', 'f']][1:9]]
-
- def _check_diff_index(df_list, result, exp_index):
- reindexed = [x.reindex(exp_index) for x in df_list]
- expected = reindexed[0].join(reindexed[1:])
- tm.assert_frame_equal(result, expected)
-
- # different join types
- joined = df_list[0].join(df_list[1:], how='outer')
- _check_diff_index(df_list, joined, df.index)
-
- joined = df_list[0].join(df_list[1:])
- _check_diff_index(df_list, joined, df_list[0].index)
-
- joined = df_list[0].join(df_list[1:], how='inner')
- _check_diff_index(df_list, joined, df.index[2:8])
-
- self.assertRaises(ValueError, df_list[0].join, df_list[1:], on='a')
-
- def test_join_many_mixed(self):
- df = DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
- df['key'] = ['foo', 'bar'] * 4
- df1 = df.ix[:, ['A', 'B']]
- df2 = df.ix[:, ['C', 'D']]
- df3 = df.ix[:, ['key']]
-
- result = df1.join([df2, df3])
- assert_frame_equal(result, df)
-
- def test_append_missing_column_proper_upcast(self):
- df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8')})
- df2 = DataFrame({'B': np.array([True, False, True, False],
- dtype=bool)})
-
- appended = df1.append(df2, ignore_index=True)
- self.assertEqual(appended['A'].dtype, 'f8')
- self.assertEqual(appended['B'].dtype, 'O')
-
- def test_concat_copy(self):
-
- df = DataFrame(np.random.randn(4, 3))
- df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1))
- df3 = DataFrame({5: 'foo'}, index=range(4))
-
- # these are actual copies
- result = concat([df, df2, df3], axis=1, copy=True)
- for b in result._data.blocks:
- self.assertIsNone(b.values.base)
-
- # these are the same
- result = concat([df, df2, df3], axis=1, copy=False)
- for b in result._data.blocks:
- if b.is_float:
- self.assertTrue(
- b.values.base is df._data.blocks[0].values.base)
- elif b.is_integer:
- self.assertTrue(
- b.values.base is df2._data.blocks[0].values.base)
- elif b.is_object:
- self.assertIsNotNone(b.values.base)
-
- # float block was consolidated
- df4 = DataFrame(np.random.randn(4, 1))
- result = concat([df, df2, df3, df4], axis=1, copy=False)
- for b in result._data.blocks:
- if b.is_float:
- self.assertIsNone(b.values.base)
- elif b.is_integer:
- self.assertTrue(
- b.values.base is df2._data.blocks[0].values.base)
- elif b.is_object:
- self.assertIsNotNone(b.values.base)
-
- def test_concat_with_group_keys(self):
- df = DataFrame(np.random.randn(4, 3))
- df2 = DataFrame(np.random.randn(4, 4))
-
- # axis=0
- df = DataFrame(np.random.randn(3, 4))
- df2 = DataFrame(np.random.randn(4, 4))
-
- result = concat([df, df2], keys=[0, 1])
- exp_index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1, 1],
- [0, 1, 2, 0, 1, 2, 3]])
- expected = DataFrame(np.r_[df.values, df2.values],
- index=exp_index)
- tm.assert_frame_equal(result, expected)
-
- result = concat([df, df], keys=[0, 1])
- exp_index2 = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1],
- [0, 1, 2, 0, 1, 2]])
- expected = DataFrame(np.r_[df.values, df.values],
- index=exp_index2)
- tm.assert_frame_equal(result, expected)
-
- # axis=1
- df = DataFrame(np.random.randn(4, 3))
- df2 = DataFrame(np.random.randn(4, 4))
-
- result = concat([df, df2], keys=[0, 1], axis=1)
- expected = DataFrame(np.c_[df.values, df2.values],
- columns=exp_index)
- tm.assert_frame_equal(result, expected)
-
- result = concat([df, df], keys=[0, 1], axis=1)
- expected = DataFrame(np.c_[df.values, df.values],
- columns=exp_index2)
- tm.assert_frame_equal(result, expected)
-
- def test_concat_keys_specific_levels(self):
- df = DataFrame(np.random.randn(10, 4))
- pieces = [df.ix[:, [0, 1]], df.ix[:, [2]], df.ix[:, [3]]]
- level = ['three', 'two', 'one', 'zero']
- result = concat(pieces, axis=1, keys=['one', 'two', 'three'],
- levels=[level],
- names=['group_key'])
-
- self.assert_numpy_array_equal(result.columns.levels[0], level)
- self.assertEqual(result.columns.names[0], 'group_key')
-
- def test_concat_dataframe_keys_bug(self):
- t1 = DataFrame({
- 'value': Series([1, 2, 3], index=Index(['a', 'b', 'c'],
- name='id'))})
- t2 = DataFrame({
- 'value': Series([7, 8], index=Index(['a', 'b'], name='id'))})
-
- # it works
- result = concat([t1, t2], axis=1, keys=['t1', 't2'])
- self.assertEqual(list(result.columns), [('t1', 'value'),
- ('t2', 'value')])
-
- def test_concat_series_partial_columns_names(self):
- # GH10698
- foo = Series([1, 2], name='foo')
- bar = Series([1, 2])
- baz = Series([4, 5])
-
- result = concat([foo, bar, baz], axis=1)
- expected = DataFrame({'foo': [1, 2], 0: [1, 2], 1: [
- 4, 5]}, columns=['foo', 0, 1])
- tm.assert_frame_equal(result, expected)
-
- result = concat([foo, bar, baz], axis=1, keys=[
- 'red', 'blue', 'yellow'])
- expected = DataFrame({'red': [1, 2], 'blue': [1, 2], 'yellow': [
- 4, 5]}, columns=['red', 'blue', 'yellow'])
- tm.assert_frame_equal(result, expected)
-
- result = concat([foo, bar, baz], axis=1, ignore_index=True)
- expected = DataFrame({0: [1, 2], 1: [1, 2], 2: [4, 5]})
- tm.assert_frame_equal(result, expected)
-
- def test_concat_dict(self):
- frames = {'foo': DataFrame(np.random.randn(4, 3)),
- 'bar': DataFrame(np.random.randn(4, 3)),
- 'baz': DataFrame(np.random.randn(4, 3)),
- 'qux': DataFrame(np.random.randn(4, 3))}
-
- sorted_keys = sorted(frames)
-
- result = concat(frames)
- expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys)
- tm.assert_frame_equal(result, expected)
-
- result = concat(frames, axis=1)
- expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys,
- axis=1)
- tm.assert_frame_equal(result, expected)
-
- keys = ['baz', 'foo', 'bar']
- result = concat(frames, keys=keys)
- expected = concat([frames[k] for k in keys], keys=keys)
- tm.assert_frame_equal(result, expected)
-
- def test_concat_ignore_index(self):
- frame1 = DataFrame({"test1": ["a", "b", "c"],
- "test2": [1, 2, 3],
- "test3": [4.5, 3.2, 1.2]})
- frame2 = DataFrame({"test3": [5.2, 2.2, 4.3]})
- frame1.index = Index(["x", "y", "z"])
- frame2.index = Index(["x", "y", "q"])
-
- v1 = concat([frame1, frame2], axis=1, ignore_index=True)
-
- nan = np.nan
- expected = DataFrame([[nan, nan, nan, 4.3],
- ['a', 1, 4.5, 5.2],
- ['b', 2, 3.2, 2.2],
- ['c', 3, 1.2, nan]],
- index=Index(["q", "x", "y", "z"]))
-
- tm.assert_frame_equal(v1, expected)
-
- def test_concat_multiindex_with_keys(self):
- index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
- ['one', 'two', 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['first', 'second'])
- frame = DataFrame(np.random.randn(10, 3), index=index,
- columns=Index(['A', 'B', 'C'], name='exp'))
- result = concat([frame, frame], keys=[0, 1], names=['iteration'])
-
- self.assertEqual(result.index.names, ('iteration',) + index.names)
- tm.assert_frame_equal(result.ix[0], frame)
- tm.assert_frame_equal(result.ix[1], frame)
- self.assertEqual(result.index.nlevels, 3)
-
- def test_concat_multiindex_with_tz(self):
- # GH 6606
- df = DataFrame({'dt': [datetime(2014, 1, 1),
- datetime(2014, 1, 2),
- datetime(2014, 1, 3)],
- 'b': ['A', 'B', 'C'],
- 'c': [1, 2, 3], 'd': [4, 5, 6]})
- df['dt'] = df['dt'].apply(lambda d: Timestamp(d, tz='US/Pacific'))
- df = df.set_index(['dt', 'b'])
-
- exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02',
- '2014-01-03'] * 2,
- tz='US/Pacific', name='dt')
- exp_idx2 = Index(['A', 'B', 'C'] * 2, name='b')
- exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2])
- expected = DataFrame({'c': [1, 2, 3] * 2, 'd': [4, 5, 6] * 2},
- index=exp_idx, columns=['c', 'd'])
-
- result = concat([df, df])
- tm.assert_frame_equal(result, expected)
-
- def test_concat_keys_and_levels(self):
- df = DataFrame(np.random.randn(1, 3))
- df2 = DataFrame(np.random.randn(1, 4))
-
- levels = [['foo', 'baz'], ['one', 'two']]
- names = ['first', 'second']
- result = concat([df, df2, df, df2],
- keys=[('foo', 'one'), ('foo', 'two'),
- ('baz', 'one'), ('baz', 'two')],
- levels=levels,
- names=names)
- expected = concat([df, df2, df, df2])
- exp_index = MultiIndex(levels=levels + [[0]],
- labels=[[0, 0, 1, 1], [0, 1, 0, 1],
- [0, 0, 0, 0]],
- names=names + [None])
- expected.index = exp_index
-
- assert_frame_equal(result, expected)
-
- # no names
-
- result = concat([df, df2, df, df2],
- keys=[('foo', 'one'), ('foo', 'two'),
- ('baz', 'one'), ('baz', 'two')],
- levels=levels)
- self.assertEqual(result.index.names, (None,) * 3)
-
- # no levels
- result = concat([df, df2, df, df2],
- keys=[('foo', 'one'), ('foo', 'two'),
- ('baz', 'one'), ('baz', 'two')],
- names=['first', 'second'])
- self.assertEqual(result.index.names, ('first', 'second') + (None,))
- self.assert_numpy_array_equal(result.index.levels[0], ['baz', 'foo'])
-
- def test_concat_keys_levels_no_overlap(self):
- # GH #1406
- df = DataFrame(np.random.randn(1, 3), index=['a'])
- df2 = DataFrame(np.random.randn(1, 4), index=['b'])
-
- self.assertRaises(ValueError, concat, [df, df],
- keys=['one', 'two'], levels=[['foo', 'bar', 'baz']])
-
- self.assertRaises(ValueError, concat, [df, df2],
- keys=['one', 'two'], levels=[['foo', 'bar', 'baz']])
-
- def test_concat_rename_index(self):
- a = DataFrame(np.random.rand(3, 3),
- columns=list('ABC'),
- index=Index(list('abc'), name='index_a'))
- b = DataFrame(np.random.rand(3, 3),
- columns=list('ABC'),
- index=Index(list('abc'), name='index_b'))
-
- result = concat([a, b], keys=['key0', 'key1'],
- names=['lvl0', 'lvl1'])
-
- exp = concat([a, b], keys=['key0', 'key1'], names=['lvl0'])
- names = list(exp.index.names)
- names[1] = 'lvl1'
- exp.index.set_names(names, inplace=True)
-
- tm.assert_frame_equal(result, exp)
- self.assertEqual(result.index.names, exp.index.names)
-
- def test_crossed_dtypes_weird_corner(self):
- columns = ['A', 'B', 'C', 'D']
- df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'),
- 'B': np.array([1, 2, 3, 4], dtype='i8'),
- 'C': np.array([1, 2, 3, 4], dtype='f8'),
- 'D': np.array([1, 2, 3, 4], dtype='i8')},
- columns=columns)
-
- df2 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8'),
- 'B': np.array([1, 2, 3, 4], dtype='f8'),
- 'C': np.array([1, 2, 3, 4], dtype='i8'),
- 'D': np.array([1, 2, 3, 4], dtype='f8')},
- columns=columns)
-
- appended = df1.append(df2, ignore_index=True)
- expected = DataFrame(np.concatenate([df1.values, df2.values], axis=0),
- columns=columns)
- tm.assert_frame_equal(appended, expected)
-
- df = DataFrame(np.random.randn(1, 3), index=['a'])
- df2 = DataFrame(np.random.randn(1, 4), index=['b'])
- result = concat(
- [df, df2], keys=['one', 'two'], names=['first', 'second'])
- self.assertEqual(result.index.names, ('first', 'second'))
-
- def test_dups_index(self):
- # GH 4771
-
- # single dtypes
- df = DataFrame(np.random.randint(0, 10, size=40).reshape(
- 10, 4), columns=['A', 'A', 'C', 'C'])
-
- result = concat([df, df], axis=1)
- assert_frame_equal(result.iloc[:, :4], df)
- assert_frame_equal(result.iloc[:, 4:], df)
-
- result = concat([df, df], axis=0)
- assert_frame_equal(result.iloc[:10], df)
- assert_frame_equal(result.iloc[10:], df)
-
- # multi dtypes
- df = concat([DataFrame(np.random.randn(10, 4),
- columns=['A', 'A', 'B', 'B']),
- DataFrame(np.random.randint(0, 10, size=20)
- .reshape(10, 2),
- columns=['A', 'C'])],
- axis=1)
-
- result = concat([df, df], axis=1)
- assert_frame_equal(result.iloc[:, :6], df)
- assert_frame_equal(result.iloc[:, 6:], df)
-
- result = concat([df, df], axis=0)
- assert_frame_equal(result.iloc[:10], df)
- assert_frame_equal(result.iloc[10:], df)
-
- # append
- result = df.iloc[0:8, :].append(df.iloc[8:])
- assert_frame_equal(result, df)
-
- result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10])
- assert_frame_equal(result, df)
-
- expected = concat([df, df], axis=0)
- result = df.append(df)
- assert_frame_equal(result, expected)
-
- def test_with_mixed_tuples(self):
- # 10697
- # columns have mixed tuples, so handle properly
- df1 = DataFrame({u'A': 'foo', (u'B', 1): 'bar'}, index=range(2))
- df2 = DataFrame({u'B': 'foo', (u'B', 1): 'bar'}, index=range(2))
-
- # it works
- concat([df1, df2])
-
- def test_join_dups(self):
-
- # joining dups
- df = concat([DataFrame(np.random.randn(10, 4),
- columns=['A', 'A', 'B', 'B']),
- DataFrame(np.random.randint(0, 10, size=20)
- .reshape(10, 2),
- columns=['A', 'C'])],
- axis=1)
-
- expected = concat([df, df], axis=1)
- result = df.join(df, rsuffix='_2')
- result.columns = expected.columns
- assert_frame_equal(result, expected)
-
- # GH 4975, invalid join on dups
- w = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
- x = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
- y = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
- z = DataFrame(np.random.randn(4, 2), columns=["x", "y"])
-
- dta = x.merge(y, left_index=True, right_index=True).merge(
- z, left_index=True, right_index=True, how="outer")
- dta = dta.merge(w, left_index=True, right_index=True)
- expected = concat([x, y, z, w], axis=1)
- expected.columns = ['x_x', 'y_x', 'x_y',
- 'y_y', 'x_x', 'y_x', 'x_y', 'y_y']
- assert_frame_equal(dta, expected)
-
- def test_handle_empty_objects(self):
- df = DataFrame(np.random.randn(10, 4), columns=list('abcd'))
-
- baz = df[:5].copy()
- baz['foo'] = 'bar'
- empty = df[5:5]
-
- frames = [baz, empty, empty, df[5:]]
- concatted = concat(frames, axis=0)
-
- expected = df.ix[:, ['a', 'b', 'c', 'd', 'foo']]
- expected['foo'] = expected['foo'].astype('O')
- expected.loc[0:4, 'foo'] = 'bar'
-
- tm.assert_frame_equal(concatted, expected)
-
- # empty as first element with time series
- # GH3259
- df = DataFrame(dict(A=range(10000)), index=date_range(
- '20130101', periods=10000, freq='s'))
- empty = DataFrame()
- result = concat([df, empty], axis=1)
- assert_frame_equal(result, df)
- result = concat([empty, df], axis=1)
- assert_frame_equal(result, df)
-
- result = concat([df, empty])
- assert_frame_equal(result, df)
- result = concat([empty, df])
- assert_frame_equal(result, df)
-
- def test_concat_mixed_objs(self):
-
- # concat mixed series/frames
- # G2385
-
- # axis 1
- index = date_range('01-Jan-2013', periods=10, freq='H')
- arr = np.arange(10, dtype='int64')
- s1 = Series(arr, index=index)
- s2 = Series(arr, index=index)
- df = DataFrame(arr.reshape(-1, 1), index=index)
-
- expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2),
- index=index, columns=[0, 0])
- result = concat([df, df], axis=1)
- assert_frame_equal(result, expected)
-
- expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2),
- index=index, columns=[0, 1])
- result = concat([s1, s2], axis=1)
- assert_frame_equal(result, expected)
-
- expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
- index=index, columns=[0, 1, 2])
- result = concat([s1, s2, s1], axis=1)
- assert_frame_equal(result, expected)
-
- expected = DataFrame(np.repeat(arr, 5).reshape(-1, 5),
- index=index, columns=[0, 0, 1, 2, 3])
- result = concat([s1, df, s2, s2, s1], axis=1)
- assert_frame_equal(result, expected)
-
- # with names
- s1.name = 'foo'
- expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
- index=index, columns=['foo', 0, 0])
- result = concat([s1, df, s2], axis=1)
- assert_frame_equal(result, expected)
-
- s2.name = 'bar'
- expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
- index=index, columns=['foo', 0, 'bar'])
- result = concat([s1, df, s2], axis=1)
- assert_frame_equal(result, expected)
-
- # ignore index
- expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3),
- index=index, columns=[0, 1, 2])
- result = concat([s1, df, s2], axis=1, ignore_index=True)
- assert_frame_equal(result, expected)
-
- # axis 0
- expected = DataFrame(np.tile(arr, 3).reshape(-1, 1),
- index=index.tolist() * 3, columns=[0])
- result = concat([s1, df, s2])
- assert_frame_equal(result, expected)
-
- expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), columns=[0])
- result = concat([s1, df, s2], ignore_index=True)
- assert_frame_equal(result, expected)
-
- # invalid concatente of mixed dims
- panel = tm.makePanel()
- self.assertRaises(ValueError, lambda: concat([panel, s1], axis=1))
-
- def test_panel_join(self):
- panel = tm.makePanel()
- tm.add_nans(panel)
-
- p1 = panel.ix[:2, :10, :3]
- p2 = panel.ix[2:, 5:, 2:]
-
- # left join
- result = p1.join(p2)
- expected = p1.copy()
- expected['ItemC'] = p2['ItemC']
- tm.assert_panel_equal(result, expected)
-
- # right join
- result = p1.join(p2, how='right')
- expected = p2.copy()
- expected['ItemA'] = p1['ItemA']
- expected['ItemB'] = p1['ItemB']
- expected = expected.reindex(items=['ItemA', 'ItemB', 'ItemC'])
- tm.assert_panel_equal(result, expected)
-
- # inner join
- result = p1.join(p2, how='inner')
- expected = panel.ix[:, 5:10, 2:3]
- tm.assert_panel_equal(result, expected)
-
- # outer join
- result = p1.join(p2, how='outer')
- expected = p1.reindex(major=panel.major_axis,
- minor=panel.minor_axis)
- expected = expected.join(p2.reindex(major=panel.major_axis,
- minor=panel.minor_axis))
- tm.assert_panel_equal(result, expected)
-
- def test_panel_join_overlap(self):
- panel = tm.makePanel()
- tm.add_nans(panel)
-
- p1 = panel.ix[['ItemA', 'ItemB', 'ItemC']]
- p2 = panel.ix[['ItemB', 'ItemC']]
-
- # Expected index is
- #
- # ItemA, ItemB_p1, ItemC_p1, ItemB_p2, ItemC_p2
- joined = p1.join(p2, lsuffix='_p1', rsuffix='_p2')
- p1_suf = p1.ix[['ItemB', 'ItemC']].add_suffix('_p1')
- p2_suf = p2.ix[['ItemB', 'ItemC']].add_suffix('_p2')
- no_overlap = panel.ix[['ItemA']]
- expected = no_overlap.join(p1_suf.join(p2_suf))
- tm.assert_panel_equal(joined, expected)
-
- def test_panel_join_many(self):
- tm.K = 10
- panel = tm.makePanel()
- tm.K = 4
-
- panels = [panel.ix[:2], panel.ix[2:6], panel.ix[6:]]
-
- joined = panels[0].join(panels[1:])
- tm.assert_panel_equal(joined, panel)
-
- panels = [panel.ix[:2, :-5], panel.ix[2:6, 2:], panel.ix[6:, 5:-7]]
-
- data_dict = {}
- for p in panels:
- data_dict.update(p.iteritems())
-
- joined = panels[0].join(panels[1:], how='inner')
- expected = Panel.from_dict(data_dict, intersect=True)
- tm.assert_panel_equal(joined, expected)
-
- joined = panels[0].join(panels[1:], how='outer')
- expected = Panel.from_dict(data_dict, intersect=False)
- tm.assert_panel_equal(joined, expected)
-
- # edge cases
- self.assertRaises(ValueError, panels[0].join, panels[1:],
- how='outer', lsuffix='foo', rsuffix='bar')
- self.assertRaises(ValueError, panels[0].join, panels[1:],
- how='right')
-
- def test_panel_concat_other_axes(self):
- panel = tm.makePanel()
-
- p1 = panel.ix[:, :5, :]
- p2 = panel.ix[:, 5:, :]
-
- result = concat([p1, p2], axis=1)
- tm.assert_panel_equal(result, panel)
-
- p1 = panel.ix[:, :, :2]
- p2 = panel.ix[:, :, 2:]
-
- result = concat([p1, p2], axis=2)
- tm.assert_panel_equal(result, panel)
-
- # if things are a bit misbehaved
- p1 = panel.ix[:2, :, :2]
- p2 = panel.ix[:, :, 2:]
- p1['ItemC'] = 'baz'
-
- result = concat([p1, p2], axis=2)
-
- expected = panel.copy()
- expected['ItemC'] = expected['ItemC'].astype('O')
- expected.ix['ItemC', :, :2] = 'baz'
- tm.assert_panel_equal(result, expected)
-
- def test_panel_concat_buglet(self):
- # #2257
- def make_panel():
- index = 5
- cols = 3
-
- def df():
- return DataFrame(np.random.randn(index, cols),
- index=["I%s" % i for i in range(index)],
- columns=["C%s" % i for i in range(cols)])
- return Panel(dict([("Item%s" % x, df()) for x in ['A', 'B', 'C']]))
-
- panel1 = make_panel()
- panel2 = make_panel()
-
- panel2 = panel2.rename_axis(dict([(x, "%s_1" % x)
- for x in panel2.major_axis]),
- axis=1)
-
- panel3 = panel2.rename_axis(lambda x: '%s_1' % x, axis=1)
- panel3 = panel3.rename_axis(lambda x: '%s_1' % x, axis=2)
-
- # it works!
- concat([panel1, panel3], axis=1, verify_integrity=True)
-
- def test_panel4d_concat(self):
- p4d = tm.makePanel4D()
-
- p1 = p4d.ix[:, :, :5, :]
- p2 = p4d.ix[:, :, 5:, :]
-
- result = concat([p1, p2], axis=2)
- tm.assert_panel4d_equal(result, p4d)
-
- p1 = p4d.ix[:, :, :, :2]
- p2 = p4d.ix[:, :, :, 2:]
-
- result = concat([p1, p2], axis=3)
- tm.assert_panel4d_equal(result, p4d)
-
- def test_panel4d_concat_mixed_type(self):
- p4d = tm.makePanel4D()
-
- # if things are a bit misbehaved
- p1 = p4d.ix[:, :2, :, :2]
- p2 = p4d.ix[:, :, :, 2:]
- p1['L5'] = 'baz'
-
- result = concat([p1, p2], axis=3)
-
- p2['L5'] = np.nan
- expected = concat([p1, p2], axis=3)
- expected = expected.ix[result.labels]
-
- tm.assert_panel4d_equal(result, expected)
-
- def test_concat_series(self):
-
- ts = tm.makeTimeSeries()
- ts.name = 'foo'
-
- pieces = [ts[:5], ts[5:15], ts[15:]]
-
- result = concat(pieces)
- tm.assert_series_equal(result, ts)
- self.assertEqual(result.name, ts.name)
-
- result = concat(pieces, keys=[0, 1, 2])
- expected = ts.copy()
-
- ts.index = DatetimeIndex(np.array(ts.index.values, dtype='M8[ns]'))
-
- exp_labels = [np.repeat([0, 1, 2], [len(x) for x in pieces]),
- np.arange(len(ts))]
- exp_index = MultiIndex(levels=[[0, 1, 2], ts.index],
- labels=exp_labels)
- expected.index = exp_index
- tm.assert_series_equal(result, expected)
-
- def test_concat_series_axis1(self):
- ts = tm.makeTimeSeries()
-
- pieces = [ts[:-2], ts[2:], ts[2:-2]]
-
- result = concat(pieces, axis=1)
- expected = DataFrame(pieces).T
- assert_frame_equal(result, expected)
-
- result = concat(pieces, keys=['A', 'B', 'C'], axis=1)
- expected = DataFrame(pieces, index=['A', 'B', 'C']).T
- assert_frame_equal(result, expected)
-
- # preserve series names, #2489
- s = Series(randn(5), name='A')
- s2 = Series(randn(5), name='B')
-
- result = concat([s, s2], axis=1)
- expected = DataFrame({'A': s, 'B': s2})
- assert_frame_equal(result, expected)
-
- s2.name = None
- result = concat([s, s2], axis=1)
- self.assertTrue(np.array_equal(
- result.columns, Index(['A', 0], dtype='object')))
-
- # must reindex, #2603
- s = Series(randn(3), index=['c', 'a', 'b'], name='A')
- s2 = Series(randn(4), index=['d', 'a', 'b', 'c'], name='B')
- result = concat([s, s2], axis=1)
- expected = DataFrame({'A': s, 'B': s2})
- assert_frame_equal(result, expected)
-
- def test_concat_single_with_key(self):
- df = DataFrame(np.random.randn(10, 4))
-
- result = concat([df], keys=['foo'])
- expected = concat([df, df], keys=['foo', 'bar'])
- tm.assert_frame_equal(result, expected[:10])
-
- def test_concat_exclude_none(self):
- df = DataFrame(np.random.randn(10, 4))
-
- pieces = [df[:5], None, None, df[5:]]
- result = concat(pieces)
- tm.assert_frame_equal(result, df)
- self.assertRaises(ValueError, concat, [None, None])
-
- def test_concat_datetime64_block(self):
- from pandas.tseries.index import date_range
-
- rng = date_range('1/1/2000', periods=10)
-
- df = DataFrame({'time': rng})
-
- result = concat([df, df])
- self.assertTrue((result.iloc[:10]['time'] == rng).all())
- self.assertTrue((result.iloc[10:]['time'] == rng).all())
-
- def test_concat_timedelta64_block(self):
- from pandas import to_timedelta
-
- rng = to_timedelta(np.arange(10), unit='s')
-
- df = DataFrame({'time': rng})
-
- result = concat([df, df])
- self.assertTrue((result.iloc[:10]['time'] == rng).all())
- self.assertTrue((result.iloc[10:]['time'] == rng).all())
-
- def test_concat_keys_with_none(self):
- # #1649
- df0 = DataFrame([[10, 20, 30], [10, 20, 30], [10, 20, 30]])
-
- result = concat(dict(a=None, b=df0, c=df0[:2], d=df0[:1], e=df0))
- expected = concat(dict(b=df0, c=df0[:2], d=df0[:1], e=df0))
- tm.assert_frame_equal(result, expected)
-
- result = concat([None, df0, df0[:2], df0[:1], df0],
- keys=['a', 'b', 'c', 'd', 'e'])
- expected = concat([df0, df0[:2], df0[:1], df0],
- keys=['b', 'c', 'd', 'e'])
- tm.assert_frame_equal(result, expected)
-
- def test_concat_bug_1719(self):
- ts1 = tm.makeTimeSeries()
- ts2 = tm.makeTimeSeries()[::2]
-
- # to join with union
- # these two are of different length!
- left = concat([ts1, ts2], join='outer', axis=1)
- right = concat([ts2, ts1], join='outer', axis=1)
-
- self.assertEqual(len(left), len(right))
-
- def test_concat_bug_2972(self):
- ts0 = Series(np.zeros(5))
- ts1 = Series(np.ones(5))
- ts0.name = ts1.name = 'same name'
- result = concat([ts0, ts1], axis=1)
-
- expected = DataFrame({0: ts0, 1: ts1})
- expected.columns = ['same name', 'same name']
- assert_frame_equal(result, expected)
-
- def test_concat_bug_3602(self):
-
- # GH 3602, duplicate columns
- df1 = DataFrame({'firmNo': [0, 0, 0, 0], 'stringvar': [
- 'rrr', 'rrr', 'rrr', 'rrr'], 'prc': [6, 6, 6, 6]})
- df2 = DataFrame({'misc': [1, 2, 3, 4], 'prc': [
- 6, 6, 6, 6], 'C': [9, 10, 11, 12]})
- expected = DataFrame([[0, 6, 'rrr', 9, 1, 6],
- [0, 6, 'rrr', 10, 2, 6],
- [0, 6, 'rrr', 11, 3, 6],
- [0, 6, 'rrr', 12, 4, 6]])
- expected.columns = ['firmNo', 'prc', 'stringvar', 'C', 'misc', 'prc']
-
- result = concat([df1, df2], axis=1)
- assert_frame_equal(result, expected)
-
- def test_concat_series_axis1_same_names_ignore_index(self):
- dates = date_range('01-Jan-2013', '01-Jan-2014', freq='MS')[0:-1]
- s1 = Series(randn(len(dates)), index=dates, name='value')
- s2 = Series(randn(len(dates)), index=dates, name='value')
-
- result = concat([s1, s2], axis=1, ignore_index=True)
- self.assertTrue(np.array_equal(result.columns, [0, 1]))
-
- def test_concat_iterables(self):
- from collections import deque, Iterable
-
- # GH8645 check concat works with tuples, list, generators, and weird
- # stuff like deque and custom iterables
- df1 = DataFrame([1, 2, 3])
- df2 = DataFrame([4, 5, 6])
- expected = DataFrame([1, 2, 3, 4, 5, 6])
- assert_frame_equal(concat((df1, df2), ignore_index=True), expected)
- assert_frame_equal(concat([df1, df2], ignore_index=True), expected)
- assert_frame_equal(concat((df for df in (df1, df2)),
- ignore_index=True), expected)
- assert_frame_equal(
- concat(deque((df1, df2)), ignore_index=True), expected)
-
- class CustomIterator1(object):
-
- def __len__(self):
- return 2
-
- def __getitem__(self, index):
- try:
- return {0: df1, 1: df2}[index]
- except KeyError:
- raise IndexError
- assert_frame_equal(pd.concat(CustomIterator1(),
- ignore_index=True), expected)
-
- class CustomIterator2(Iterable):
-
- def __iter__(self):
- yield df1
- yield df2
- assert_frame_equal(pd.concat(CustomIterator2(),
- ignore_index=True), expected)
-
- def test_concat_invalid(self):
-
- # trying to concat a ndframe with a non-ndframe
- df1 = mkdf(10, 2)
- for obj in [1, dict(), [1, 2], (1, 2)]:
- self.assertRaises(TypeError, lambda x: concat([df1, obj]))
-
- def test_concat_invalid_first_argument(self):
- df1 = mkdf(10, 2)
- df2 = mkdf(10, 2)
- self.assertRaises(TypeError, concat, df1, df2)
-
- # generator ok though
- concat(DataFrame(np.random.rand(5, 5)) for _ in range(3))
-
- # text reader ok
- # GH6583
- data = """index,A,B,C,D
-foo,2,3,4,5
-bar,7,8,9,10
-baz,12,13,14,15
-qux,12,13,14,15
-foo2,12,13,14,15
-bar2,12,13,14,15
-"""
-
- reader = read_csv(StringIO(data), chunksize=1)
- result = concat(reader, ignore_index=True)
- expected = read_csv(StringIO(data))
- assert_frame_equal(result, expected)
-
-
-class TestOrderedMerge(tm.TestCase):
-
- def setUp(self):
- self.left = DataFrame({'key': ['a', 'c', 'e'],
- 'lvalue': [1, 2., 3]})
-
- self.right = DataFrame({'key': ['b', 'c', 'd', 'f'],
- 'rvalue': [1, 2, 3., 4]})
-
- # GH #813
-
- def test_basic(self):
- result = ordered_merge(self.left, self.right, on='key')
- expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
- 'lvalue': [1, nan, 2, nan, 3, nan],
- 'rvalue': [nan, 1, 2, 3, nan, 4]})
-
- assert_frame_equal(result, expected)
-
- def test_ffill(self):
- result = ordered_merge(
- self.left, self.right, on='key', fill_method='ffill')
- expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
- 'lvalue': [1., 1, 2, 2, 3, 3.],
- 'rvalue': [nan, 1, 2, 3, 3, 4]})
- assert_frame_equal(result, expected)
-
- def test_multigroup(self):
- left = concat([self.left, self.left], ignore_index=True)
- # right = concat([self.right, self.right], ignore_index=True)
-
- left['group'] = ['a'] * 3 + ['b'] * 3
- # right['group'] = ['a'] * 4 + ['b'] * 4
-
- result = ordered_merge(left, self.right, on='key', left_by='group',
- fill_method='ffill')
- expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'] * 2,
- 'lvalue': [1., 1, 2, 2, 3, 3.] * 2,
- 'rvalue': [nan, 1, 2, 3, 3, 4] * 2})
- expected['group'] = ['a'] * 6 + ['b'] * 6
-
- assert_frame_equal(result, expected.ix[:, result.columns])
-
- result2 = ordered_merge(self.right, left, on='key', right_by='group',
- fill_method='ffill')
- assert_frame_equal(result, result2.ix[:, result.columns])
-
- result = ordered_merge(left, self.right, on='key', left_by='group')
- self.assertTrue(result['group'].notnull().all())
-
- def test_merge_type(self):
- class NotADataFrame(DataFrame):
-
- @property
- def _constructor(self):
- return NotADataFrame
-
- nad = NotADataFrame(self.left)
- result = nad.merge(self.right, on='key')
-
- tm.assertIsInstance(result, NotADataFrame)
-
- def test_empty_sequence_concat(self):
- # GH 9157
- empty_pat = "[Nn]o objects"
- none_pat = "objects.*None"
- test_cases = [
- ((), empty_pat),
- ([], empty_pat),
- ({}, empty_pat),
- ([None], none_pat),
- ([None, None], none_pat)
- ]
- for df_seq, pattern in test_cases:
- assertRaisesRegexp(ValueError, pattern, pd.concat, df_seq)
-
- pd.concat([pd.DataFrame()])
- pd.concat([None, pd.DataFrame()])
- pd.concat([pd.DataFrame(), None])
-
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tools/tests/test_ordered_merge.py b/pandas/tools/tests/test_ordered_merge.py
new file mode 100644
index 0000000000000..53f00d9761f32
--- /dev/null
+++ b/pandas/tools/tests/test_ordered_merge.py
@@ -0,0 +1,93 @@
+import nose
+
+import pandas as pd
+from pandas import DataFrame, ordered_merge
+from pandas.util import testing as tm
+from pandas.util.testing import assert_frame_equal
+
+from numpy import nan
+
+
+class TestOrderedMerge(tm.TestCase):
+
+ def setUp(self):
+ self.left = DataFrame({'key': ['a', 'c', 'e'],
+ 'lvalue': [1, 2., 3]})
+
+ self.right = DataFrame({'key': ['b', 'c', 'd', 'f'],
+ 'rvalue': [1, 2, 3., 4]})
+
+ # GH #813
+
+ def test_basic(self):
+ result = ordered_merge(self.left, self.right, on='key')
+ expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
+ 'lvalue': [1, nan, 2, nan, 3, nan],
+ 'rvalue': [nan, 1, 2, 3, nan, 4]})
+
+ assert_frame_equal(result, expected)
+
+ def test_ffill(self):
+ result = ordered_merge(
+ self.left, self.right, on='key', fill_method='ffill')
+ expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
+ 'lvalue': [1., 1, 2, 2, 3, 3.],
+ 'rvalue': [nan, 1, 2, 3, 3, 4]})
+ assert_frame_equal(result, expected)
+
+ def test_multigroup(self):
+ left = pd.concat([self.left, self.left], ignore_index=True)
+ # right = concat([self.right, self.right], ignore_index=True)
+
+ left['group'] = ['a'] * 3 + ['b'] * 3
+ # right['group'] = ['a'] * 4 + ['b'] * 4
+
+ result = ordered_merge(left, self.right, on='key', left_by='group',
+ fill_method='ffill')
+ expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'] * 2,
+ 'lvalue': [1., 1, 2, 2, 3, 3.] * 2,
+ 'rvalue': [nan, 1, 2, 3, 3, 4] * 2})
+ expected['group'] = ['a'] * 6 + ['b'] * 6
+
+ assert_frame_equal(result, expected.ix[:, result.columns])
+
+ result2 = ordered_merge(self.right, left, on='key', right_by='group',
+ fill_method='ffill')
+ assert_frame_equal(result, result2.ix[:, result.columns])
+
+ result = ordered_merge(left, self.right, on='key', left_by='group')
+ self.assertTrue(result['group'].notnull().all())
+
+ def test_merge_type(self):
+ class NotADataFrame(DataFrame):
+
+ @property
+ def _constructor(self):
+ return NotADataFrame
+
+ nad = NotADataFrame(self.left)
+ result = nad.merge(self.right, on='key')
+
+ tm.assertIsInstance(result, NotADataFrame)
+
+ def test_empty_sequence_concat(self):
+ # GH 9157
+ empty_pat = "[Nn]o objects"
+ none_pat = "objects.*None"
+ test_cases = [
+ ((), empty_pat),
+ ([], empty_pat),
+ ({}, empty_pat),
+ ([None], none_pat),
+ ([None, None], none_pat)
+ ]
+ for df_seq, pattern in test_cases:
+ tm.assertRaisesRegexp(ValueError, pattern, pd.concat, df_seq)
+
+ pd.concat([pd.DataFrame()])
+ pd.concat([None, pd.DataFrame()])
+ pd.concat([pd.DataFrame(), None])
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/types/concat.py b/pandas/types/concat.py
index eb18023d6409d..5cd7abb6889b7 100644
--- a/pandas/types/concat.py
+++ b/pandas/types/concat.py
@@ -249,7 +249,7 @@ def convert_to_pydatetime(x, axis):
# thus no need to care
# we require ALL of the same tz for datetimetz
- tzs = set([x.tz for x in to_concat])
+ tzs = set([str(x.tz) for x in to_concat])
if len(tzs) == 1:
from pandas.tseries.index import DatetimeIndex
new_values = np.concatenate([x.tz_localize(None).asi8
| very small bug fix w.r.t. tz concatting
| https://api.github.com/repos/pandas-dev/pandas/pulls/13300 | 2016-05-26T22:21:08Z | 2016-05-26T22:28:35Z | null | 2016-05-26T22:28:35Z |
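The one-line change to ``pandas/types/concat.py`` in this PR deduplicates timezones by their string form (``str(x.tz)``) rather than by object identity. A minimal stand-in sketch of why that matters — the ``TZ`` class below is a hypothetical placeholder, not a pandas type, for tz objects that define ``__str__`` but not ``__eq__``/``__hash__``:

```python
# Hypothetical stand-in for a timezone object: distinct instances never
# compare equal (default identity-based __eq__/__hash__), but instances
# for the same zone render to the same string.
class TZ(object):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name


to_concat = [TZ('US/Pacific'), TZ('US/Pacific')]

# Deduplicating by object keeps both instances, so the same zone
# looks like two different timezones:
assert len(set(to_concat)) == 2

# Deduplicating by str() collapses them into one, as the fix does:
assert len(set([str(x) for x in to_concat])) == 1
```

With the pre-fix code, two frames carrying equivalent-but-distinct tz objects would fall out of the ``len(tzs) == 1`` fast path even though they share a zone.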
BUG: Fix describe(): percentiles (#13104), col index (#13288) | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 2b67aca1dcf74..88dcda444a09d 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -228,6 +228,75 @@ resulting dtype will be upcast (unchanged from previous).
pd.merge(df1, df2, how='outer', on='key')
pd.merge(df1, df2, how='outer', on='key').dtypes
+.. _whatsnew_0182.api.describe:
+
+``.describe()`` changes
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Percentile identifiers in the index of a ``.describe()`` output will now be rounded to the least precision that keeps them distinct (:issue:`13104`)
+
+.. ipython:: python
+
+ s = pd.Series([0, 1, 2, 3, 4])
+ df = pd.DataFrame([0, 1, 2, 3, 4])
+
+Previous Behavior:
+
+They were rounded to at most one decimal place; percentiles that collided after rounding produced duplicate labels, which could raise ``ValueError`` when describing a ``DataFrame``.
+
+.. code-block:: ipython
+
+ In [3]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+ Out[3]:
+ count 5.000000
+ mean 2.000000
+ std 1.581139
+ min 0.000000
+ 0.0% 0.000400
+ 0.1% 0.002000
+ 0.1% 0.004000
+ 50% 2.000000
+ 99.9% 3.996000
+ 100.0% 3.998000
+ 100.0% 3.999600
+ max 4.000000
+ dtype: float64
+
+ In [4]: df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+ Out[4]:
+ ...
+ ValueError: cannot reindex from a duplicate axis
+
+New Behavior:
+
+.. ipython:: python
+
+ s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+ df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+
+In addition, both ``Series.describe()`` and ``DataFrame.describe()`` will now raise ``ValueError`` if the passed ``percentiles`` contain duplicates.
+
+Another fixed bug could raise ``ValueError`` when the column index of a ``DataFrame`` contained entries of different types (:issue:`13288`)
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A': list("BCDE"), 0: [1, 2, 3, 4]})
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [8]: df.describe()
+ Out[8]:
+ ...
+ ValueError: Buffer dtype mismatch, expected 'Python object' but got 'long'
+
+New Behavior:
+
+.. ipython:: python
+
+ df.describe()
+
.. _whatsnew_0182.api.other:
Other API changes
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 99599d2b04a45..9ecaaebc2b523 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -20,6 +20,7 @@
import pandas.core.missing as missing
import pandas.core.datetools as datetools
from pandas.formats.printing import pprint_thing
+from pandas.formats.format import format_percentiles
from pandas import compat
from pandas.compat.numpy import function as nv
from pandas.compat import (map, zip, lrange, string_types,
@@ -4868,32 +4869,33 @@ def abs(self):
@Appender(_shared_docs['describe'] % _shared_doc_kwargs)
def describe(self, percentiles=None, include=None, exclude=None):
if self.ndim >= 3:
- msg = "describe is not implemented on on Panel or PanelND objects."
+ msg = "describe is not implemented on Panel or PanelND objects."
raise NotImplementedError(msg)
+ elif self.ndim == 2 and self.columns.size == 0:
+ raise ValueError("Cannot describe a DataFrame without columns")
if percentiles is not None:
# get them all to be in [0, 1]
self._check_percentile(percentiles)
+
+ # median should always be included
+ if 0.5 not in percentiles:
+ percentiles.append(0.5)
percentiles = np.asarray(percentiles)
else:
percentiles = np.array([0.25, 0.5, 0.75])
- # median should always be included
- if (percentiles != 0.5).all(): # median isn't included
- lh = percentiles[percentiles < .5]
- uh = percentiles[percentiles > .5]
- percentiles = np.hstack([lh, 0.5, uh])
+ # sort and check for duplicates
+ unique_pcts = np.unique(percentiles)
+ if len(unique_pcts) < len(percentiles):
+ raise ValueError("percentiles cannot contain duplicates")
+ percentiles = unique_pcts
- def pretty_name(x):
- x *= 100
- if x == int(x):
- return '%.0f%%' % x
- else:
- return '%.1f%%' % x
+ formatted_percentiles = format_percentiles(percentiles)
- def describe_numeric_1d(series, percentiles):
+ def describe_numeric_1d(series):
stat_index = (['count', 'mean', 'std', 'min'] +
- [pretty_name(x) for x in percentiles] + ['max'])
+ formatted_percentiles + ['max'])
d = ([series.count(), series.mean(), series.std(), series.min()] +
[series.quantile(x) for x in percentiles] + [series.max()])
return pd.Series(d, index=stat_index, name=series.name)
@@ -4918,18 +4920,18 @@ def describe_categorical_1d(data):
return pd.Series(result, index=names, name=data.name)
- def describe_1d(data, percentiles):
+ def describe_1d(data):
if com.is_bool_dtype(data):
return describe_categorical_1d(data)
elif com.is_numeric_dtype(data):
- return describe_numeric_1d(data, percentiles)
+ return describe_numeric_1d(data)
elif com.is_timedelta64_dtype(data):
- return describe_numeric_1d(data, percentiles)
+ return describe_numeric_1d(data)
else:
return describe_categorical_1d(data)
if self.ndim == 1:
- return describe_1d(self, percentiles)
+ return describe_1d(self)
elif (include is None) and (exclude is None):
if len(self._get_numeric_data()._info_axis) > 0:
# when some numerics are found, keep only numerics
@@ -4944,7 +4946,7 @@ def describe_1d(data, percentiles):
else:
data = self.select_dtypes(include=include, exclude=exclude)
- ldesc = [describe_1d(s, percentiles) for _, s in data.iteritems()]
+ ldesc = [describe_1d(s) for _, s in data.iteritems()]
# set a convenient order for rows
names = []
ldesc_indexes = sorted([x.index for x in ldesc], key=len)
@@ -4954,8 +4956,7 @@ def describe_1d(data, percentiles):
names.append(name)
d = pd.concat(ldesc, join_axes=pd.Index([names]), axis=1)
- d.columns = self.columns._shallow_copy(values=d.columns.values)
- d.columns.names = data.columns.names
+ d.columns = data.columns.copy()
return d
def _check_percentile(self, q):
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
index 70b506a1415c1..ecdfbc3cc4c71 100644
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -6,7 +6,7 @@
import sys
from pandas.core.base import PandasObject
-from pandas.core.common import isnull, notnull
+from pandas.core.common import isnull, notnull, is_numeric_dtype
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas import compat
from pandas.compat import (StringIO, lzip, range, map, zip, reduce, u,
@@ -2260,6 +2260,67 @@ def _format_strings(self):
return fmt_values
+def format_percentiles(percentiles):
+ """
+ Outputs rounded and formatted percentiles.
+
+ Parameters
+ ----------
+ percentiles : list-like, containing floats from interval [0,1]
+
+ Returns
+ -------
+ formatted : list of strings
+
+ Notes
+ -----
+ Rounding precision is chosen so that: (1) if any two elements of
+ ``percentiles`` differ, they remain different after rounding
+ (2) no entry is *rounded* to 0% or 100%.
+ Any non-integer is always rounded to at least 1 decimal place.
+
+ Examples
+ --------
+ Keeps all entries different after rounding:
+
+ >>> format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999])
+ ['1.999%', '2.001%', '50%', '66.667%', '99.99%']
+
+ No element is rounded to 0% or 100% (unless already equal to it).
+ Duplicates are allowed:
+
+ >>> format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999])
+ ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%']
+ """
+
+ percentiles = np.asarray(percentiles)
+
+ # It checks for np.NaN as well
+ if not is_numeric_dtype(percentiles) or not np.all(percentiles >= 0) \
+ or not np.all(percentiles <= 1):
+ raise ValueError("percentiles should all be in the interval [0,1]")
+
+ percentiles = 100 * percentiles
+ int_idx = (percentiles.astype(int) == percentiles)
+
+ if np.all(int_idx):
+ out = percentiles.astype(int).astype(str)
+ return [i + '%' for i in out]
+
+ unique_pcts = np.unique(percentiles)
+ to_begin = unique_pcts[0] if unique_pcts[0] > 0 else None
+ to_end = 100 - unique_pcts[-1] if unique_pcts[-1] < 100 else None
+ # Least precision that keeps percentiles unique after rounding
+ prec = -np.floor(np.log10(np.min(
+ np.ediff1d(unique_pcts, to_begin=to_begin, to_end=to_end)
+ ))).astype(int)
+ prec = max(1, prec)
+ out = np.empty_like(percentiles, dtype=object)
+ out[int_idx] = percentiles[int_idx].astype(int).astype(str)
+ out[~int_idx] = percentiles[~int_idx].round(prec).astype(str)
+ return [i + '%' for i in out]
+
+
def _is_dates_only(values):
# return a boolean if we are only dates (and don't have a timezone)
values = DatetimeIndex(values)
diff --git a/pandas/tests/formats/test_format.py b/pandas/tests/formats/test_format.py
index 7a806280916f1..e67fe2cddde77 100644
--- a/pandas/tests/formats/test_format.py
+++ b/pandas/tests/formats/test_format.py
@@ -4264,6 +4264,21 @@ def test_nat_representations(self):
self.assertEqual(f(pd.NaT), 'NaT')
+def test_format_percentiles():
+ result = fmt.format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999])
+ expected = ['1.999%', '2.001%', '50%', '66.667%', '99.99%']
+ tm.assert_equal(result, expected)
+
+ result = fmt.format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999])
+ expected = ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%']
+ tm.assert_equal(result, expected)
+
+ tm.assertRaises(ValueError, fmt.format_percentiles, [0.1, np.nan, 0.5])
+ tm.assertRaises(ValueError, fmt.format_percentiles, [-0.001, 0.1, 0.5])
+ tm.assertRaises(ValueError, fmt.format_percentiles, [2, 0.1, 0.5])
+ tm.assertRaises(ValueError, fmt.format_percentiles, [0.1, 0.5, 'a'])
+
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 83e1a17fc8b0c..2f4c2b414cc30 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -996,6 +996,59 @@ def test_describe_percentiles_insert_median(self):
self.assertTrue('0%' in d1.index)
self.assertTrue('100%' in d2.index)
+ def test_describe_percentiles_unique(self):
+ # GH13104
+ df = tm.makeDataFrame()
+ with self.assertRaises(ValueError):
+ df.describe(percentiles=[0.1, 0.2, 0.4, 0.5, 0.2, 0.6])
+ with self.assertRaises(ValueError):
+ df.describe(percentiles=[0.1, 0.2, 0.4, 0.2, 0.6])
+
+ def test_describe_percentiles_formatting(self):
+ # GH13104
+ df = tm.makeDataFrame()
+
+ # default
+ result = df.describe().index
+ expected = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%',
+ 'max'],
+ dtype='object')
+ tm.assert_index_equal(result, expected)
+
+ result = df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999,
+ 0.9995, 0.9999]).index
+ expected = Index(['count', 'mean', 'std', 'min', '0.01%', '0.05%',
+ '0.1%', '50%', '99.9%', '99.95%', '99.99%', 'max'],
+ dtype='object')
+ tm.assert_index_equal(result, expected)
+
+ result = df.describe(percentiles=[0.00499, 0.005, 0.25, 0.50,
+ 0.75]).index
+ expected = Index(['count', 'mean', 'std', 'min', '0.499%', '0.5%',
+ '25%', '50%', '75%', 'max'],
+ dtype='object')
+ tm.assert_index_equal(result, expected)
+
+ result = df.describe(percentiles=[0.00499, 0.01001, 0.25, 0.50,
+ 0.75]).index
+ expected = Index(['count', 'mean', 'std', 'min', '0.5%', '1.0%',
+ '25%', '50%', '75%', 'max'],
+ dtype='object')
+ tm.assert_index_equal(result, expected)
+
+ def test_describe_column_index_type(self):
+ # GH13288
+ df = pd.DataFrame([1, 2, 3, 4])
+ df.columns = pd.Index([0], dtype=object)
+ result = df.describe().columns
+ expected = Index([0], dtype=object)
+ tm.assert_index_equal(result, expected)
+
+ df = pd.DataFrame({'A': list("BCDE"), 0: [1, 2, 3, 4]})
+ result = df.describe().columns
+ expected = Index([0], dtype=object)
+ tm.assert_index_equal(result, expected)
+
def test_describe_no_numeric(self):
df = DataFrame({'A': ['foo', 'foo', 'bar'] * 8,
'B': ['a', 'b', 'c', 'd'] * 6})
@@ -1010,6 +1063,16 @@ def test_describe_no_numeric(self):
desc = df.describe()
self.assertEqual(desc.time['first'], min(ts.index))
+ def test_describe_empty(self):
+ df = DataFrame()
+ tm.assertRaisesRegexp(ValueError, 'DataFrame without columns',
+ df.describe)
+
+ df = DataFrame(columns=['A', 'B'])
+ result = df.describe()
+ expected = DataFrame(0, columns=['A', 'B'], index=['count', 'unique'])
+ tm.assert_frame_equal(result, expected)
+
def test_describe_empty_int_columns(self):
df = DataFrame([[0, 1], [1, 2]])
desc = df[df[0] < 0].describe() # works
| - [x] closes #13104, #13288
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
BUG #13104:
- Percentiles are now rounded to the least precision that keeps
them unique.
- Supplying duplicates in percentiles will raise ValueError.
BUG #13288
- Fixed the column index of the output data frame.
Previously, if a data frame had a column index of object dtype
containing numeric values, the output column index could be
corrupt, which led to a ValueError when the output was displayed.
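A quick sketch of the two behaviors described above (illustrative; assumes a pandas version that includes these fixes):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(10, dtype=float)})

# Percentile labels are rounded to the least precision that keeps them unique
desc = df.describe(percentiles=[0.00499, 0.005, 0.25, 0.75])
print(list(desc.index))

# Supplying duplicate percentiles raises ValueError
raised = False
try:
    df.describe(percentiles=[0.1, 0.2, 0.2])
except ValueError:
    raised = True
print(raised)
```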
| https://api.github.com/repos/pandas-dev/pandas/pulls/13298 | 2016-05-26T20:39:43Z | 2016-05-31T14:12:51Z | null | 2016-06-09T04:04:38Z |
DOC: low_memory in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 104172d9574f1..6cf41bbc50fb5 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -169,6 +169,13 @@ skipfooter : int, default ``0``
Number of lines at bottom of file to skip (unsupported with engine='c').
nrows : int, default ``None``
Number of rows of file to read. Useful for reading pieces of large files.
+low_memory : boolean, default ``True``
+ Internally process the file in chunks, resulting in lower memory use
+ while parsing, but possibly mixed type inference. To ensure no mixed
+ types either set ``False``, or specify the type with the ``dtype`` parameter.
+ Note that the entire file is read into a single DataFrame regardless,
+ use the ``chunksize`` or ``iterator`` parameter to return the data in chunks.
+ (Only valid with C parser)
NA and Missing Data Handling
++++++++++++++++++++++++++++
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 95a7f63075167..bf4083f61155c 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -220,6 +220,13 @@
warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each
"bad line" will be output. (Only valid with C parser).
+low_memory : boolean, default True
+ Internally process the file in chunks, resulting in lower memory use
+ while parsing, but possibly mixed type inference. To ensure no mixed
+ types either set False, or specify the type with the `dtype` parameter.
+ Note that the entire file is read into a single DataFrame regardless,
+ use the `chunksize` or `iterator` parameter to return the data in chunks.
+ (Only valid with C parser)
Returns
-------
| - [x] closes #5888, xref #12686
- [x] passes `git diff upstream/master | flake8 --diff`
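A minimal sketch of the two recommendations in the new doc text — pin types with `dtype` to avoid mixed type inference, or use `chunksize` to get the data back in pieces:

```python
import io
import pandas as pd

csv = "a,b\n1,x\n2,y\n3,z\n"

# Pinning dtypes up front avoids mixed type inference entirely
df = pd.read_csv(io.StringIO(csv), dtype={"a": "int64", "b": "object"})

# chunksize returns an iterator of DataFrames instead of one big frame
chunks = list(pd.read_csv(io.StringIO(csv), chunksize=2))
```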
| https://api.github.com/repos/pandas-dev/pandas/pulls/13293 | 2016-05-26T11:06:58Z | 2016-05-26T23:56:09Z | null | 2016-11-14T08:38:07Z |
COMPAT: extension dtypes (DatetimeTZ, Categorical) are now Singleton cached objects | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index ebae54f292e3c..3e31858bb3683 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -242,7 +242,7 @@ Bug Fixes
- Bug in ``Series`` arithmetic raises ``TypeError`` if it contains datetime-like as ``object`` dtype (:issue:`13043`)
-
+- Bug in extension dtype creation where the created types were not is/identical (:issue:`13285`)
- Bug in ``NaT`` - ``Period`` raises ``AttributeError`` (:issue:`13071`)
- Bug in ``Period`` addition raises ``TypeError`` if ``Period`` is on right hand side (:issue:`13069`)
diff --git a/pandas/tests/types/test_dtypes.py b/pandas/tests/types/test_dtypes.py
index 2a9ad30a07805..d48b9baf64777 100644
--- a/pandas/tests/types/test_dtypes.py
+++ b/pandas/tests/types/test_dtypes.py
@@ -45,6 +45,16 @@ class TestCategoricalDtype(Base, tm.TestCase):
def setUp(self):
self.dtype = CategoricalDtype()
+ def test_hash_vs_equality(self):
+ # make sure that we satisfy is semantics
+ dtype = self.dtype
+ dtype2 = CategoricalDtype()
+ self.assertTrue(dtype == dtype2)
+ self.assertTrue(dtype2 == dtype)
+ self.assertTrue(dtype is dtype2)
+ self.assertTrue(dtype2 is dtype)
+ self.assertTrue(hash(dtype) == hash(dtype2))
+
def test_equality(self):
self.assertTrue(is_dtype_equal(self.dtype, 'category'))
self.assertTrue(is_dtype_equal(self.dtype, CategoricalDtype()))
@@ -88,6 +98,20 @@ class TestDatetimeTZDtype(Base, tm.TestCase):
def setUp(self):
self.dtype = DatetimeTZDtype('ns', 'US/Eastern')
+ def test_hash_vs_equality(self):
+ # make sure that we satisfy is semantics
+ dtype = self.dtype
+ dtype2 = DatetimeTZDtype('ns', 'US/Eastern')
+ dtype3 = DatetimeTZDtype(dtype2)
+ self.assertTrue(dtype == dtype2)
+ self.assertTrue(dtype2 == dtype)
+ self.assertTrue(dtype3 == dtype)
+ self.assertTrue(dtype is dtype2)
+ self.assertTrue(dtype2 is dtype)
+ self.assertTrue(dtype3 is dtype)
+ self.assertTrue(hash(dtype) == hash(dtype2))
+ self.assertTrue(hash(dtype) == hash(dtype3))
+
def test_construction(self):
self.assertRaises(ValueError,
lambda: DatetimeTZDtype('ms', 'US/Eastern'))
diff --git a/pandas/types/dtypes.py b/pandas/types/dtypes.py
index e6adbc8500117..140d494c3e1b2 100644
--- a/pandas/types/dtypes.py
+++ b/pandas/types/dtypes.py
@@ -108,6 +108,16 @@ class CategoricalDtype(ExtensionDtype):
kind = 'O'
str = '|O08'
base = np.dtype('O')
+ _cache = {}
+
+ def __new__(cls):
+
+ try:
+ return cls._cache[cls.name]
+ except KeyError:
+ c = object.__new__(cls)
+ cls._cache[cls.name] = c
+ return c
def __hash__(self):
# make myself hashable
@@ -155,9 +165,11 @@ class DatetimeTZDtype(ExtensionDtype):
base = np.dtype('M8[ns]')
_metadata = ['unit', 'tz']
_match = re.compile("(datetime64|M8)\[(?P<unit>.+), (?P<tz>.+)\]")
+ _cache = {}
+
+ def __new__(cls, unit=None, tz=None):
+ """ Create a new unit if needed, otherwise return from the cache
- def __init__(self, unit, tz=None):
- """
Parameters
----------
unit : string unit that this represents, currently must be 'ns'
@@ -165,28 +177,46 @@ def __init__(self, unit, tz=None):
"""
if isinstance(unit, DatetimeTZDtype):
- self.unit, self.tz = unit.unit, unit.tz
- return
+ unit, tz = unit.unit, unit.tz
- if tz is None:
+ elif unit is None:
+ # we are called as an empty constructor
+ # generally for pickle compat
+ return object.__new__(cls)
+
+ elif tz is None:
# we were passed a string that we can construct
try:
- m = self._match.search(unit)
+ m = cls._match.search(unit)
if m is not None:
- self.unit = m.groupdict()['unit']
- self.tz = m.groupdict()['tz']
- return
+ unit = m.groupdict()['unit']
+ tz = m.groupdict()['tz']
except:
raise ValueError("could not construct DatetimeTZDtype")
+ elif isinstance(unit, compat.string_types):
+
+ if unit != 'ns':
+ raise ValueError("DatetimeTZDtype only supports ns units")
+
+ unit = unit
+ tz = tz
+
+ if tz is None:
raise ValueError("DatetimeTZDtype constructor must have a tz "
"supplied")
- if unit != 'ns':
- raise ValueError("DatetimeTZDtype only supports ns units")
- self.unit = unit
- self.tz = tz
+ # set/retrieve from cache
+ key = (unit, str(tz))
+ try:
+ return cls._cache[key]
+ except KeyError:
+ u = object.__new__(cls)
+ u.unit = unit
+ u.tz = tz
+ cls._cache[key] = u
+ return u
@classmethod
def construct_from_string(cls, string):
| Allows for proper `is` / `==` comparisons.
Previously there was an odd semantic difference: equal dtypes were really different objects (though they DID hash the same).
This doesn't actually affect any user code.
```
In [1]: from pandas.core import common as com
In [2]: t1 = com.DatetimeTZDtype('datetime64[ns, US/Eastern]')
In [3]: t2 = com.DatetimeTZDtype('datetime64[ns, US/Eastern]')
In [4]: t1 == t2
Out[4]: True
In [5]: t1 is t2
Out[5]: False
In [6]: hash(t1)
Out[6]: 5756291921003024619
In [7]: hash(t2)
Out[7]: 5756291921003024619
```
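The caching trick this PR adds to `__new__` can be sketched standalone (the class name here is illustrative, not pandas API):

```python
class CachedDtype(object):
    # Identical parameters return the *same* object, so ``is`` agrees with ``==``
    _cache = {}

    def __new__(cls, unit, tz):
        key = (unit, str(tz))
        try:
            return cls._cache[key]
        except KeyError:
            obj = object.__new__(cls)
            obj.unit, obj.tz = unit, tz
            cls._cache[key] = obj
            return obj

t1 = CachedDtype("ns", "US/Eastern")
t2 = CachedDtype("ns", "US/Eastern")
print(t1 is t2)  # True: identity now matches equality and hash
```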
| https://api.github.com/repos/pandas-dev/pandas/pulls/13285 | 2016-05-25T17:11:32Z | 2016-05-26T16:13:01Z | null | 2016-05-26T16:13:01Z |
Remove imp and just use importlib to avoid memory error when importin… | diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py
index 115423f3e3e22..e74568f39418c 100644
--- a/pandas/util/print_versions.py
+++ b/pandas/util/print_versions.py
@@ -4,6 +4,7 @@
import struct
import subprocess
import codecs
+import importlib
def get_sys_info():
@@ -55,7 +56,6 @@ def get_sys_info():
def show_versions(as_json=False):
- import imp
sys_info = get_sys_info()
deps = [
@@ -99,11 +99,7 @@ def show_versions(as_json=False):
deps_blob = list()
for (modname, ver_f) in deps:
try:
- try:
- mod = imp.load_module(modname, *imp.find_module(modname))
- except (ImportError):
- import importlib
- mod = importlib.import_module(modname)
+ mod = importlib.import_module(modname)
ver = ver_f(mod)
deps_blob.append((modname, ver))
except:
| - [ ] closes #13282
Remove `imp` from `pandas.show_versions()` to fix the memory problem when importing modules.
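A sketch of the `importlib`-only approach the PR switches to (the helper name here is illustrative):

```python
import importlib

def module_version(modname):
    # Import by name; missing modules and missing __version__ both yield None
    try:
        mod = importlib.import_module(modname)
    except ImportError:
        return None
    return getattr(mod, "__version__", None)

print(module_version("math"))             # stdlib module without __version__
print(module_version("not_a_real_module"))
```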
| https://api.github.com/repos/pandas-dev/pandas/pulls/13284 | 2016-05-25T13:46:46Z | 2016-05-25T17:35:10Z | null | 2016-05-26T18:44:02Z |
Change get_dummies() to return columns of dtype=bool instead of np.float64 | diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 8d237016d1b33..b217b3f768865 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -1159,14 +1159,14 @@ def get_empty_Frame(data, sparse):
sp_indices = sp_indices[1:]
dummy_cols = dummy_cols[1:]
for col, ixs in zip(dummy_cols, sp_indices):
- sarr = SparseArray(np.ones(len(ixs)),
+ sarr = SparseArray(np.ones(len(ixs), dtype=bool),
sparse_index=IntIndex(N, ixs), fill_value=0)
- sparse_series[col] = SparseSeries(data=sarr, index=index)
+ sparse_series[col] = SparseSeries(data=sarr, index=index, dtype=bool)
- return SparseDataFrame(sparse_series, index=index, columns=dummy_cols)
+ return SparseDataFrame(sparse_series, index=index, columns=dummy_cols, dtype=bool)
else:
- dummy_mat = np.eye(number_of_cols).take(codes, axis=0)
+ dummy_mat = np.eye(number_of_cols, dtype=bool).take(codes, axis=0)
if not dummy_na:
# reset NaN GH4446
@@ -1176,7 +1176,7 @@ def get_empty_Frame(data, sparse):
# remove first GH12042
dummy_mat = dummy_mat[:, 1:]
dummy_cols = dummy_cols[1:]
- return DataFrame(dummy_mat, index=index, columns=dummy_cols)
+ return DataFrame(dummy_mat, index=index, columns=dummy_cols, dtype=bool)
def make_axis_dummies(frame, axis='minor', transform=None):
| - closes #8725
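The heart of the change is building the dummy matrix from a boolean identity matrix; a standalone sketch of the `np.eye(...).take(codes)` trick:

```python
import numpy as np

# Integer codes for categories ['a', 'b', 'c']
codes = np.array([0, 2, 1, 0])

# Each row of the identity matrix is the one-hot encoding of one category
dummy = np.eye(3, dtype=bool).take(codes, axis=0)
print(dummy.dtype)  # bool instead of float64
```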
| https://api.github.com/repos/pandas-dev/pandas/pulls/13283 | 2016-05-25T11:04:47Z | 2016-05-31T15:30:06Z | null | 2016-05-31T15:30:06Z |
DOC: Added an example of pitfalls when using astype | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index e3b0915cd571d..917d2f2bb8b04 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1726,6 +1726,28 @@ then the more *general* one will be used as the result of the operation.
# conversion of dtypes
df3.astype('float32').dtypes
+Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
+
+.. ipython:: python
+
+ dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]})
+ dft[['a','b']] = dft[['a','b']].astype(np.uint8)
+ dft
+ dft.dtypes
+
+.. note::
+
+ When trying to convert a subset of columns to a specified type using :meth:`~DataFrame.astype` and :meth:`~DataFrame.loc`, upcasting occurs.
+
+ :meth:`~DataFrame.loc` tries to fit in what we are assigning to the current dtypes, while ``[]`` will overwrite them taking the dtype from the right hand side. Therefore the following piece of code produces the unintended result.
+
+ .. ipython:: python
+
+ dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]})
+ dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes
+ dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8)
+ dft.dtypes
+
object conversion
~~~~~~~~~~~~~~~~~
| - [x] closes #13260
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
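The safe path from the new doc section, as a runnable sketch (the `.loc` upcasting behavior is version-dependent, so only the `[]` assignment is shown):

```python
import numpy as np
import pandas as pd

dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})

# [] assignment takes the dtype from the right-hand side, so uint8 sticks
dft[['a', 'b']] = dft[['a', 'b']].astype(np.uint8)
print(dft.dtypes)
```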
| https://api.github.com/repos/pandas-dev/pandas/pulls/13278 | 2016-05-25T04:42:56Z | 2016-05-26T18:15:44Z | null | 2016-05-26T18:15:56Z |
clean up PeriodIndex constructor | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index dca4f890e496b..1f3ddb8923bac 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -585,6 +585,7 @@ Bug Fixes
- Bug in ``DataFrame.sort_values()`` when sorting by multiple columns where one column is of type ``int64`` and contains ``NaT`` (:issue:`14922`)
- Bug in ``DataFrame.reindex()`` in which ``method`` was ignored when passing ``columns`` (:issue:`14992`)
- Bug in ``pd.to_numeric()`` in which float and unsigned integer elements were being improperly casted (:issue:`14941`, :issue:`15005`)
+- Cleaned up ``PeriodIndex`` constructor, including raising on floats more consistently (:issue:`13277`)
- Bug in ``pd.read_csv()`` in which the ``dialect`` parameter was not being verified before processing (:issue:`14898`)
- Bug in ``pd.read_fwf`` where the skiprows parameter was not being respected during column width inference (:issue:`11256`)
- Bug in ``pd.read_csv()`` in which missing data was being improperly handled with ``usecols`` (:issue:`6710`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 55d404f05dd1d..d37c98c9b9b90 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -471,8 +471,8 @@ def _value_counts_arraylike(values, dropna=True):
# dtype handling
if is_datetimetz_type:
keys = DatetimeIndex._simple_new(keys, tz=orig.dtype.tz)
- if is_period_type:
- keys = PeriodIndex._simple_new(keys, freq=freq)
+ elif is_period_type:
+ keys = PeriodIndex._from_ordinals(keys, freq=freq)
elif is_signed_integer_dtype(dtype):
values = _ensure_int64(values)
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index 5d43d2d32af67..e441d9a88690d 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -88,6 +88,11 @@ def _new_Index(cls, d):
""" This is called upon unpickling, rather than the default which doesn't
have arguments and breaks __new__
"""
+ # required for backward compat, because PI can't be instantiated with
+ # ordinals through __new__ GH #13277
+ if issubclass(cls, ABCPeriodIndex):
+ from pandas.tseries.period import _new_PeriodIndex
+ return _new_PeriodIndex(cls, **d)
return cls.__new__(cls, **d)
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index 7afe8a06b6af1..39bc1a4ecf225 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -573,7 +573,7 @@ def decode(obj):
elif typ == u'period_index':
data = unconvert(obj[u'data'], np.int64, obj.get(u'compress'))
d = dict(name=obj[u'name'], freq=obj[u'freq'])
- return globals()[obj[u'klass']](data, **d)
+ return globals()[obj[u'klass']]._from_ordinals(data, **d)
elif typ == u'datetime_index':
data = unconvert(obj[u'data'], np.int64, obj.get(u'compress'))
d = dict(name=obj[u'name'], freq=obj[u'freq'], verify_integrity=False)
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index 228615829b5b8..f13a84f4f0e92 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -120,7 +120,7 @@ def test_constructor_fromarraylike(self):
self.assertRaises(ValueError, PeriodIndex, idx._values)
self.assertRaises(ValueError, PeriodIndex, list(idx._values))
- self.assertRaises(ValueError, PeriodIndex,
+ self.assertRaises(TypeError, PeriodIndex,
data=Period('2007', freq='A'))
result = PeriodIndex(iter(idx))
@@ -285,12 +285,15 @@ def test_constructor_simple_new_empty(self):
result = idx._simple_new(idx, name='p', freq='M')
tm.assert_index_equal(result, idx)
- def test_constructor_simple_new_floats(self):
+ def test_constructor_floats(self):
# GH13079
- for floats in [[1.1], np.array([1.1])]:
+ for floats in [[1.1, 2.1], np.array([1.1, 2.1])]:
with self.assertRaises(TypeError):
pd.PeriodIndex._simple_new(floats, freq='M')
+ with self.assertRaises(TypeError):
+ pd.PeriodIndex(floats, freq='M')
+
def test_constructor_nat(self):
self.assertRaises(ValueError, period_range, start='NaT',
end='2011-01-01', freq='M')
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 8a6b0c153bb50..ca61394fc0423 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -17,7 +17,6 @@
is_period_dtype,
is_bool_dtype,
pandas_dtype,
- _ensure_int64,
_ensure_object)
from pandas.types.dtypes import PeriodDtype
from pandas.types.generic import ABCSeries
@@ -114,6 +113,13 @@ def wrapper(self, other):
return wrapper
+def _new_PeriodIndex(cls, **d):
+ # GH13277 for unpickling
+ if d['data'].dtype == 'int64':
+ values = d.pop('data')
+ return cls._from_ordinals(values=values, **d)
+
+
class PeriodIndex(DatelikeOps, DatetimeIndexOpsMixin, Int64Index):
"""
Immutable ndarray holding ordinal values indicating regular periods in
@@ -209,17 +215,56 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
msg = 'specified freq and dtype are different'
raise IncompatibleFrequency(msg)
+ # coerce freq to freq object, otherwise it can be coerced elementwise
+ # which is slow
+ if freq:
+ freq = Period._maybe_convert_freq(freq)
+
if data is None:
if ordinal is not None:
data = np.asarray(ordinal, dtype=np.int64)
else:
data, freq = cls._generate_range(start, end, periods,
freq, kwargs)
- else:
- ordinal, freq = cls._from_arraylike(data, freq, tz)
- data = np.array(ordinal, dtype=np.int64, copy=copy)
+ return cls._from_ordinals(data, name=name, freq=freq)
- return cls._simple_new(data, name=name, freq=freq)
+ if isinstance(data, PeriodIndex):
+ if freq is None or freq == data.freq: # no freq change
+ freq = data.freq
+ data = data._values
+ else:
+ base1, _ = _gfc(data.freq)
+ base2, _ = _gfc(freq)
+ data = period.period_asfreq_arr(data._values,
+ base1, base2, 1)
+ return cls._simple_new(data, name=name, freq=freq)
+
+ # not array / index
+ if not isinstance(data, (np.ndarray, PeriodIndex,
+ DatetimeIndex, Int64Index)):
+ if is_scalar(data) or isinstance(data, Period):
+ cls._scalar_data_error(data)
+
+ # other iterable of some kind
+ if not isinstance(data, (list, tuple)):
+ data = list(data)
+
+ data = np.asarray(data)
+
+ # datetime other than period
+ if is_datetime64_dtype(data.dtype):
+ data = dt64arr_to_periodarr(data, freq, tz)
+ return cls._from_ordinals(data, name=name, freq=freq)
+
+ # check not floats
+ if infer_dtype(data) == 'floating' and len(data) > 0:
+ raise TypeError("PeriodIndex can't take floats")
+
+ # anything else, likely an array of strings or periods
+ data = _ensure_object(data)
+ freq = freq or period.extract_freq(data)
+ data = period.extract_ordinals(data, freq)
+ return cls._from_ordinals(data, name=name, freq=freq)
@classmethod
def _generate_range(cls, start, end, periods, freq, fields):
@@ -240,77 +285,26 @@ def _generate_range(cls, start, end, periods, freq, fields):
return subarr, freq
- @classmethod
- def _from_arraylike(cls, data, freq, tz):
- if freq is not None:
- freq = Period._maybe_convert_freq(freq)
-
- if not isinstance(data, (np.ndarray, PeriodIndex,
- DatetimeIndex, Int64Index)):
- if is_scalar(data) or isinstance(data, Period):
- raise ValueError('PeriodIndex() must be called with a '
- 'collection of some kind, %s was passed'
- % repr(data))
-
- # other iterable of some kind
- if not isinstance(data, (list, tuple)):
- data = list(data)
-
- try:
- data = _ensure_int64(data)
- if freq is None:
- raise ValueError('freq not specified')
- data = np.array([Period(x, freq=freq) for x in data],
- dtype=np.int64)
- except (TypeError, ValueError):
- data = _ensure_object(data)
-
- if freq is None:
- freq = period.extract_freq(data)
- data = period.extract_ordinals(data, freq)
- else:
- if isinstance(data, PeriodIndex):
- if freq is None or freq == data.freq:
- freq = data.freq
- data = data._values
- else:
- base1, _ = _gfc(data.freq)
- base2, _ = _gfc(freq)
- data = period.period_asfreq_arr(data._values,
- base1, base2, 1)
- else:
- if is_object_dtype(data):
- inferred = infer_dtype(data)
- if inferred == 'integer':
- data = data.astype(np.int64)
-
- if freq is None and is_object_dtype(data):
- # must contain Period instance and thus extract ordinals
- freq = period.extract_freq(data)
- data = period.extract_ordinals(data, freq)
-
- if freq is None:
- msg = 'freq not specified and cannot be inferred'
- raise ValueError(msg)
-
- if data.dtype != np.int64:
- if np.issubdtype(data.dtype, np.datetime64):
- data = dt64arr_to_periodarr(data, freq, tz)
- else:
- data = _ensure_object(data)
- data = period.extract_ordinals(data, freq)
-
- return data, freq
-
@classmethod
def _simple_new(cls, values, name=None, freq=None, **kwargs):
-
+ """
+ Values can be any type that can be coerced to Periods.
+ Ordinals in an ndarray are fastpath-ed to `_from_ordinals`
+ """
if not is_integer_dtype(values):
values = np.array(values, copy=False)
- if (len(values) > 0 and is_float_dtype(values)):
+ if len(values) > 0 and is_float_dtype(values):
raise TypeError("PeriodIndex can't take floats")
- else:
- return cls(values, name=name, freq=freq, **kwargs)
+ return cls(values, name=name, freq=freq, **kwargs)
+
+ return cls._from_ordinals(values, name, freq, **kwargs)
+
+ @classmethod
+ def _from_ordinals(cls, values, name=None, freq=None, **kwargs):
+ """
+ Values should be int ordinals
+ `__new__` & `_simple_new` coerce to ordinals and call this method
+ """
values = np.array(values, dtype='int64', copy=False)
@@ -318,7 +312,7 @@ def _simple_new(cls, values, name=None, freq=None, **kwargs):
result._data = values
result.name = name
if freq is None:
- raise ValueError('freq is not specified')
+ raise ValueError('freq is not specified and cannot be inferred')
result.freq = Period._maybe_convert_freq(freq)
result._reset_identity()
return result
@@ -327,13 +321,13 @@ def _shallow_copy_with_infer(self, values=None, **kwargs):
""" we always want to return a PeriodIndex """
return self._shallow_copy(values=values, **kwargs)
- def _shallow_copy(self, values=None, **kwargs):
- if kwargs.get('freq') is None:
- # freq must be provided
- kwargs['freq'] = self.freq
+ def _shallow_copy(self, values=None, freq=None, **kwargs):
+ if freq is None:
+ freq = self.freq
if values is None:
values = self._values
- return super(PeriodIndex, self)._shallow_copy(values=values, **kwargs)
+ return super(PeriodIndex, self)._shallow_copy(values=values,
+ freq=freq, **kwargs)
def _coerce_scalar_to_index(self, item):
"""
@@ -413,7 +407,7 @@ def __array_wrap__(self, result, context=None):
return result
# the result is object dtype array of Period
# cannot pass _simple_new as it is
- return PeriodIndex(result, freq=self.freq, name=self.name)
+ return self._shallow_copy(result, freq=self.freq, name=self.name)
@property
def _box_func(self):
@@ -708,7 +702,7 @@ def shift(self, n):
values = self._values + n * self.freq.n
if self.hasnans:
values[self._isnan] = tslib.iNaT
- return PeriodIndex(data=values, name=self.name, freq=self.freq)
+ return self._shallow_copy(values=values)
@cache_readonly
def dtype(self):
@@ -945,7 +939,8 @@ def _wrap_union_result(self, other, result):
def _apply_meta(self, rawarr):
if not isinstance(rawarr, PeriodIndex):
- rawarr = PeriodIndex(rawarr, freq=self.freq)
+ rawarr = PeriodIndex._from_ordinals(rawarr, freq=self.freq,
+ name=self.name)
return rawarr
def _format_native_types(self, na_rep=u('NaT'), date_format=None,
diff --git a/setup.cfg b/setup.cfg
index b9de7a3532209..8de4fc955bd50 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -13,6 +13,7 @@ parentdir_prefix = pandas-
[flake8]
ignore = E731,E402
+max-line-length = 79
[yapf]
based_on_style = pep8
| - [x] closes #13232
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Material clean up of PeriodIndex constructor, which was doing a few weird things (https://github.com/pydata/pandas/issues/13232#issuecomment-220788816), and generally getting messy.
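A sketch of the constructor behavior after the cleanup (the exact exception type for floats may vary slightly by pandas version, so both are caught):

```python
import pandas as pd

# Normal construction from period strings still works
idx = pd.PeriodIndex(['2016-01', '2016-02', '2016-03'], freq='M')
print(idx.freqstr, len(idx))

# Floats now raise consistently instead of being silently coerced
raised = False
try:
    pd.PeriodIndex([1.1, 2.1], freq='M')
except (TypeError, ValueError):
    raised = True
print(raised)
```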
| https://api.github.com/repos/pandas-dev/pandas/pulls/13277 | 2016-05-25T04:08:22Z | 2017-03-04T21:15:37Z | null | 2017-03-04T21:16:03Z |
DOC: Added additional example for groupby by indexer. | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 4cde1fed344a8..484efd12c5d78 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -1014,6 +1014,23 @@ Regroup columns of a DataFrame according to their sum, and sum the aggregated on
df
df.groupby(df.sum(), axis=1).sum()
+Groupby by Indexer to 'resample' data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Resampling produces new hypothetical samples(resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
+
+In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
+
+In the following examples, **df.index // 5** returns a binary array which is used to determine what gets selected for the groupby operation.
+
+.. note:: The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using **df.index // 5**, we are aggregating the samples in bins. By applying the **std()** function, we aggregate the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
+
+.. ipython:: python
+
+ df = pd.DataFrame(np.random.randn(10,2))
+ df
+ df.index // 5
+ df.groupby(df.index // 5).std()
Returning a Series to propagate names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| - [x] closes #13271
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
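The downsampling described in the doc addition above can be sketched without pandas; the following is a plain-Python analogue of grouping by `index // 5` (the values and bin size are illustrative, and `mean` stands in for the `std()` used in the doc example):

```python
from itertools import groupby

# Ten observations indexed 0..9, mirroring df = pd.DataFrame(np.random.randn(10, 2))
values = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0]

# index // 5 assigns rows 0-4 to bin 0 and rows 5-9 to bin 1,
# consolidating ten samples into two -- the downsampling described above.
keyed = [(i // 5, v) for i, v in enumerate(values)]
means = {k: sum(v for _, v in grp) / 5
         for k, grp in groupby(keyed, key=lambda t: t[0])}
print(means)  # -> {0: 2.8, 1: 5.0}
```

The same bucketing is what `df.groupby(df.index // 5)` performs, with the chosen aggregation applied per bin.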
| https://api.github.com/repos/pandas-dev/pandas/pulls/13276 | 2016-05-25T04:01:05Z | 2016-06-28T22:18:03Z | 2016-06-28T22:18:02Z | 2016-06-28T22:23:40Z |
BUG: Properly validate and parse nrows in read_csv | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index ee2761b79b620..c9d267c05d370 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -252,3 +252,4 @@ Bug Fixes
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``pd.read_csv`` in which the ``nrows`` argument was not properly validated for both engines (:issue:`10476`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index c939864d7a38b..95a7f63075167 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -272,6 +272,26 @@
""" % (_parser_params % (_fwf_widths, ''))
+def _validate_nrows(nrows):
+ """
+ Checks whether the 'nrows' parameter for parsing is either
+ an integer OR float that can SAFELY be cast to an integer
+ without losing accuracy. Raises a ValueError if that is
+ not the case.
+ """
+ msg = "'nrows' must be an integer"
+
+ if nrows is not None:
+ if com.is_float(nrows):
+ if int(nrows) != nrows:
+ raise ValueError(msg)
+ nrows = int(nrows)
+ elif not com.is_integer(nrows):
+ raise ValueError(msg)
+
+ return nrows
+
+
def _read(filepath_or_buffer, kwds):
"Generic reader of line files."
encoding = kwds.get('encoding', None)
@@ -311,14 +331,14 @@ def _read(filepath_or_buffer, kwds):
# Extract some of the arguments (pass chunksize on).
iterator = kwds.get('iterator', False)
- nrows = kwds.pop('nrows', None)
chunksize = kwds.get('chunksize', None)
+ nrows = _validate_nrows(kwds.pop('nrows', None))
# Create the parser.
parser = TextFileReader(filepath_or_buffer, **kwds)
if (nrows is not None) and (chunksize is not None):
- raise NotImplementedError("'nrows' and 'chunksize' can not be used"
+ raise NotImplementedError("'nrows' and 'chunksize' cannot be used"
" together yet.")
elif nrows is not None:
return parser.read(nrows)
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 90a0b420eed3c..8c4bf3644127e 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -391,10 +391,23 @@ def test_int_conversion(self):
self.assertEqual(data['B'].dtype, np.int64)
def test_read_nrows(self):
- df = self.read_csv(StringIO(self.data1), nrows=3)
expected = self.read_csv(StringIO(self.data1))[:3]
+
+ df = self.read_csv(StringIO(self.data1), nrows=3)
tm.assert_frame_equal(df, expected)
+ # see gh-10476
+ df = self.read_csv(StringIO(self.data1), nrows=3.0)
+ tm.assert_frame_equal(df, expected)
+
+ msg = "must be an integer"
+
+ with tm.assertRaisesRegexp(ValueError, msg):
+ self.read_csv(StringIO(self.data1), nrows=1.2)
+
+ with tm.assertRaisesRegexp(ValueError, msg):
+ self.read_csv(StringIO(self.data1), nrows='foo')
+
def test_read_chunksize(self):
reader = self.read_csv(StringIO(self.data1), index_col=0, chunksize=2)
df = self.read_csv(StringIO(self.data1), index_col=0)
@@ -815,11 +828,6 @@ def test_ignore_leading_whitespace(self):
expected = DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'c': [3, 6, 9]})
tm.assert_frame_equal(result, expected)
- def test_nrows_and_chunksize_raises_notimplemented(self):
- data = 'a b c'
- self.assertRaises(NotImplementedError, self.read_csv, StringIO(data),
- nrows=10, chunksize=5)
-
def test_chunk_begins_with_newline_whitespace(self):
# see gh-10022
data = '\n hello\nworld\n'
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index cefe7d939d1ab..3c1c45831e7b4 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -30,6 +30,15 @@ def test_mangle_dupe_cols_false(self):
read_csv(StringIO(data), engine=engine,
mangle_dupe_cols=False)
+ def test_nrows_and_chunksize(self):
+ data = 'a b c'
+ msg = "cannot be used together yet"
+
+ for engine in ('c', 'python'):
+ with tm.assertRaisesRegexp(NotImplementedError, msg):
+ read_csv(StringIO(data), engine=engine,
+ nrows=10, chunksize=5)
+
def test_c_engine(self):
# see gh-6607
data = 'a b c\n1 2 3'
| 1) Allows `float` values for `nrows` for the Python engine
2) Prevents abuse of the `nrows` argument for the CParser (e.g. passing `nrows=1.2`)
Closes #10476.
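A standalone sketch of the rule the new `_validate_nrows` helper enforces (simplified here with `isinstance` checks in place of pandas' `com.is_float`/`com.is_integer`):

```python
def validate_nrows(nrows):
    # Accept None, ints, and floats that are exactly integral (e.g. 3.0);
    # reject non-integral floats and non-numeric values, as described above.
    msg = "'nrows' must be an integer"
    if nrows is None:
        return None
    if isinstance(nrows, float):
        if int(nrows) != nrows:
            raise ValueError(msg)
        return int(nrows)
    if not isinstance(nrows, int):
        raise ValueError(msg)
    return nrows

print(validate_nrows(3.0))  # -> 3
```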
| https://api.github.com/repos/pandas-dev/pandas/pulls/13275 | 2016-05-25T02:20:41Z | 2016-05-25T12:09:25Z | null | 2016-05-25T12:30:44Z |
BUG, ENH: Improve infinity parsing for read_csv | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index ebae54f292e3c..9d53394ce70c9 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -78,6 +78,7 @@ Other enhancements
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
+- Consistent with the Python API, ``pd.read_csv`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
.. _whatsnew_0182.api:
@@ -257,3 +258,4 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+- Bug in ``pd.read_csv`` for the Python engine in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`)
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 325418f87af6a..aeee77bb02e98 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -447,25 +447,3 @@ def test_empty_header_read(count):
for count in range(1, 101):
test_empty_header_read(count)
-
- def test_inf_parsing(self):
- data = """\
-,A
-a,inf
-b,-inf
-c,Inf
-d,-Inf
-e,INF
-f,-INF
-g,INf
-h,-INf
-i,inF
-j,-inF"""
- inf = float('inf')
- expected = Series([inf, -inf] * 5)
-
- df = self.read_csv(StringIO(data), index_col=0)
- tm.assert_almost_equal(df['A'].values, expected.values)
-
- df = self.read_csv(StringIO(data), index_col=0, na_filter=False)
- tm.assert_almost_equal(df['A'].values, expected.values)
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 8c4bf3644127e..3912bbbf11e53 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1300,3 +1300,27 @@ def test_read_duplicate_names(self):
expected = DataFrame([[0, 1, 2], [3, 4, 5]],
columns=['a', 'b', 'a.1'])
tm.assert_frame_equal(df, expected)
+
+ def test_inf_parsing(self):
+ data = """\
+,A
+a,inf
+b,-inf
+c,+Inf
+d,-Inf
+e,INF
+f,-INF
+g,+INf
+h,-INf
+i,inF
+j,-inF"""
+ inf = float('inf')
+ expected = Series([inf, -inf] * 5)
+
+ df = self.read_csv(StringIO(data), index_col=0)
+ tm.assert_almost_equal(df['A'].values, expected.values)
+
+ if self.engine == 'c':
+ # TODO: remove condition when 'na_filter' is supported for Python
+ df = self.read_csv(StringIO(data), index_col=0, na_filter=False)
+ tm.assert_almost_equal(df['A'].values, expected.values)
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 94d7f36f4f205..729e5af528b80 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -1501,6 +1501,7 @@ cdef inline void _to_fw_string_nogil(parser_t *parser, int col, int line_start,
data += width
cdef char* cinf = b'inf'
+cdef char* cposinf = b'+inf'
cdef char* cneginf = b'-inf'
cdef _try_double(parser_t *parser, int col, int line_start, int line_end,
@@ -1562,7 +1563,7 @@ cdef inline int _try_double_nogil(parser_t *parser, int col, int line_start, int
data[0] = parser.converter(word, &p_end, parser.decimal, parser.sci,
parser.thousands, 1)
if errno != 0 or p_end[0] or p_end == word:
- if strcasecmp(word, cinf) == 0:
+ if strcasecmp(word, cinf) == 0 or strcasecmp(word, cposinf) == 0:
data[0] = INF
elif strcasecmp(word, cneginf) == 0:
data[0] = NEGINF
@@ -1581,7 +1582,7 @@ cdef inline int _try_double_nogil(parser_t *parser, int col, int line_start, int
data[0] = parser.converter(word, &p_end, parser.decimal, parser.sci,
parser.thousands, 1)
if errno != 0 or p_end[0] or p_end == word:
- if strcasecmp(word, cinf) == 0:
+ if strcasecmp(word, cinf) == 0 or strcasecmp(word, cposinf) == 0:
data[0] = INF
elif strcasecmp(word, cneginf) == 0:
data[0] = NEGINF
diff --git a/pandas/src/parse_helper.h b/pandas/src/parse_helper.h
index d47e448700029..fd5089dd8963d 100644
--- a/pandas/src/parse_helper.h
+++ b/pandas/src/parse_helper.h
@@ -1,5 +1,6 @@
#include <errno.h>
#include <float.h>
+#include "headers/portable.h"
static double xstrtod(const char *p, char **q, char decimal, char sci,
int skip_trailing, int *maybe_int);
@@ -39,22 +40,36 @@ int floatify(PyObject* str, double *result, int *maybe_int) {
if (!status) {
/* handle inf/-inf */
- if (0 == strcmp(data, "-inf")) {
- *result = -HUGE_VAL;
- *maybe_int = 0;
- } else if (0 == strcmp(data, "inf")) {
- *result = HUGE_VAL;
- *maybe_int = 0;
+ if (strlen(data) == 3) {
+ if (0 == strcasecmp(data, "inf")) {
+ *result = HUGE_VAL;
+ *maybe_int = 0;
+ } else {
+ goto parsingerror;
+ }
+ } else if (strlen(data) == 4) {
+ if (0 == strcasecmp(data, "-inf")) {
+ *result = -HUGE_VAL;
+ *maybe_int = 0;
+ } else if (0 == strcasecmp(data, "+inf")) {
+ *result = HUGE_VAL;
+ *maybe_int = 0;
+ } else {
+ goto parsingerror;
+ }
} else {
- PyErr_SetString(PyExc_ValueError, "Unable to parse string");
- Py_XDECREF(tmp);
- return -1;
+ goto parsingerror;
}
}
Py_XDECREF(tmp);
return 0;
+parsingerror:
+ PyErr_SetString(PyExc_ValueError, "Unable to parse string");
+ Py_XDECREF(tmp);
+ return -1;
+
/*
#if PY_VERSION_HEX >= 0x03000000
return PyFloat_FromString(str);
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 6912e3a7ff68c..2aa31063df446 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -188,6 +188,45 @@ def test_isinf_scalar(self):
self.assertFalse(lib.isneginf_scalar(1))
self.assertFalse(lib.isneginf_scalar('a'))
+ def test_maybe_convert_numeric_infinities(self):
+ # see gh-13274
+ infinities = ['inf', 'inF', 'iNf', 'Inf',
+ 'iNF', 'InF', 'INf', 'INF']
+ na_values = set(['', 'NULL', 'nan'])
+
+ pos = np.array(['inf'], dtype=np.float64)
+ neg = np.array(['-inf'], dtype=np.float64)
+
+ msg = "Unable to parse string"
+
+ for infinity in infinities:
+ for maybe_int in (True, False):
+ out = lib.maybe_convert_numeric(
+ np.array([infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ out = lib.maybe_convert_numeric(
+ np.array(['-' + infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, neg)
+
+ out = lib.maybe_convert_numeric(
+ np.array([u(infinity)], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ out = lib.maybe_convert_numeric(
+ np.array(['+' + infinity], dtype=object),
+ na_values, maybe_int)
+ tm.assert_numpy_array_equal(out, pos)
+
+ # too many characters
+ with tm.assertRaisesRegexp(ValueError, msg):
+ lib.maybe_convert_numeric(
+ np.array(['foo_' + infinity], dtype=object),
+ na_values, maybe_int)
+
class Testisscalar(tm.TestCase):
| 1) Allow mixed-case infinity strings for the Python engine
Bug was traced back via `lib.maybe_convert_numeric` to the `floatify` function in `pandas/src/parse_helper.h`. In addition to correcting the bug and adding tests for it, this PR also moves the `test_inf_parsing` test from `c_parser_only.py` to `common.py` in the `pandas/io/tests/parser` dir.
2) Interpret `+inf` as positive infinity for both engines
`float('+inf')` in Python is interpreted as positive infinity, so we should allow it too in parsing.
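Python's own float constructor demonstrates the behaviour the parsers are being aligned with, namely case-insensitive infinity strings and an optional leading `+`:

```python
import math

# Every mixed-case variant, with or without an explicit '+' sign,
# parses to the same positive infinity -- the behaviour read_csv now matches.
for s in ("inf", "INF", "iNf", "Inf", "+inf", "+INF"):
    assert math.isinf(float(s)) and float(s) > 0

# The negative forms are likewise case-insensitive.
assert float("-INf") == float("-inf")
print("all infinity spellings parse consistently")
```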
| https://api.github.com/repos/pandas-dev/pandas/pulls/13274 | 2016-05-24T23:24:22Z | 2016-05-25T17:32:03Z | null | 2016-05-25T19:26:55Z |
improves usability of style calls on axis=1 by automatically creating… | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 004e2dcc20084..f577f77d256a4 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -259,3 +259,5 @@ Bug Fixes
- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
+
+- Bug in Style calls specifying axis=1 and a subset of id's (:issue:`13273`)
\ No newline at end of file
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 9485f50ed07f1..600fa2710d6d7 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1925,19 +1925,24 @@ def maybe_droplevels(index, key):
return index
-def _non_reducing_slice(slice_):
+def _non_reducing_slice(slice_, axis=0):
"""
- Ensurse that a slice doesn't reduce to a Series or Scalar.
+ Ensure that a slice doesn't reduce to a Series or Scalar.
- Any user-paseed `subset` should have this called on it
+ Any user-passed `subset` should have this called on it
to make sure we're always working with DataFrames.
+
+ axis will determine the specification of the IndexSlice
"""
# default to column slice, like DataFrame
# ['A', 'B'] -> IndexSlices[:, ['A', 'B']]
kinds = tuple(list(compat.string_types) + [ABCSeries, np.ndarray, Index,
list])
if isinstance(slice_, kinds):
- slice_ = IndexSlice[:, slice_]
+ if axis == 0:
+ slice_ = IndexSlice[:, slice_]
+ elif axis == 1:
+ slice_ = IndexSlice[slice_, :]
def pred(part):
# true when slice does *not* reduce
diff --git a/pandas/formats/style.py b/pandas/formats/style.py
index f66ac7485c76e..ad121c1c3149b 100644
--- a/pandas/formats/style.py
+++ b/pandas/formats/style.py
@@ -426,7 +426,7 @@ def _compute(self):
def _apply(self, func, axis=0, subset=None, **kwargs):
subset = slice(None) if subset is None else subset
- subset = _non_reducing_slice(subset)
+ subset = _non_reducing_slice(subset, axis)
if axis is not None:
result = self.data.loc[subset].apply(func, axis=axis, **kwargs)
else:
@@ -701,7 +701,7 @@ def background_gradient(self, cmap='PuBu', low=0, high=0, axis=0,
and ``high * (x.max() - x.min())`` before normalizing.
"""
subset = _maybe_numeric_slice(self.data, subset)
- subset = _non_reducing_slice(subset)
+ subset = _non_reducing_slice(subset, axis)
self.apply(self._background_gradient, cmap=cmap, subset=subset,
axis=axis, low=low, high=high)
return self
@@ -779,7 +779,7 @@ def bar(self, subset=None, axis=0, color='#d65f5f', width=100):
self : Styler
"""
subset = _maybe_numeric_slice(self.data, subset)
- subset = _non_reducing_slice(subset)
+ subset = _non_reducing_slice(subset, axis)
self.apply(self._bar, subset=subset, axis=axis, color=color,
width=width)
return self
diff --git a/pandas/tests/formats/test_style.py b/pandas/tests/formats/test_style.py
index 5a79e3f6897f0..d444ba40cfece 100644
--- a/pandas/tests/formats/test_style.py
+++ b/pandas/tests/formats/test_style.py
@@ -555,3 +555,12 @@ def test_background_gradient(self):
result = (df.style.background_gradient(subset=pd.IndexSlice[1, 'A'])
._compute().ctx)
self.assertEqual(result[(1, 0)], ['background-color: #fff7fb'])
+
+ grad = df.style.background_gradient
+ self.assertEqual(
+ grad(subset=pd.IndexSlice[:, 'A':'B'], axis=0)._compute().ctx,
+ grad(subset=['A', 'B'], axis=0)._compute().ctx)
+
+ self.assertEqual(
+ grad(subset=pd.IndexSlice[0:1, ], axis=1)._compute().ctx,
+ grad(subset=[0, 1], axis=1)._compute().ctx)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
… the IndexSlice same as it does for axis=1
```
df = pd.DataFrame(index = ['x', 'y', 'z'])
df['a'] = [1,2,3]
df['b'] = [4,5,6]
df['c'] = [7,8,9]
df
df.style.background_gradient(subset='x', axis=1)
```
Otherwise this throws some vague exception; it seems like if you pass in `axis` this should just work.

`df.style.background_gradient(subset=pd.IndexSlice['x',], axis=1)` # same as this
| https://api.github.com/repos/pandas-dev/pandas/pulls/13273 | 2016-05-24T20:14:54Z | 2017-03-28T00:00:26Z | null | 2017-03-28T00:00:26Z |
ENH: support decimal argument in read_html #12907 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index ee2761b79b620..1438eb29eff40 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -79,6 +79,7 @@ Other enhancements
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
+- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
.. _whatsnew_0182.api:
diff --git a/pandas/io/html.py b/pandas/io/html.py
index e350a40bfa805..48caaa39dd711 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -612,7 +612,8 @@ def _expand_elements(body):
def _data_to_frame(data, header, index_col, skiprows,
- parse_dates, tupleize_cols, thousands):
+ parse_dates, tupleize_cols, thousands,
+ decimal):
head, body, foot = data
if head:
@@ -630,7 +631,7 @@ def _data_to_frame(data, header, index_col, skiprows,
tp = TextParser(body, header=header, index_col=index_col,
skiprows=_get_skiprows(skiprows),
parse_dates=parse_dates, tupleize_cols=tupleize_cols,
- thousands=thousands)
+ thousands=thousands, decimal=decimal)
df = tp.read()
return df
@@ -716,7 +717,8 @@ def _validate_flavor(flavor):
def _parse(flavor, io, match, header, index_col, skiprows,
- parse_dates, tupleize_cols, thousands, attrs, encoding):
+ parse_dates, tupleize_cols, thousands, attrs, encoding,
+ decimal):
flavor = _validate_flavor(flavor)
compiled_match = re.compile(match) # you can pass a compiled regex here
@@ -744,7 +746,9 @@ def _parse(flavor, io, match, header, index_col, skiprows,
skiprows=skiprows,
parse_dates=parse_dates,
tupleize_cols=tupleize_cols,
- thousands=thousands))
+ thousands=thousands,
+ decimal=decimal
+ ))
except EmptyDataError: # empty table
continue
return ret
@@ -752,7 +756,8 @@ def _parse(flavor, io, match, header, index_col, skiprows,
def read_html(io, match='.+', flavor=None, header=None, index_col=None,
skiprows=None, attrs=None, parse_dates=False,
- tupleize_cols=False, thousands=',', encoding=None):
+ tupleize_cols=False, thousands=',', encoding=None,
+ decimal='.'):
r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
Parameters
@@ -828,6 +833,12 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
underlying parser library (e.g., the parser library will try to use
the encoding provided by the document).
+ decimal : str, default '.'
+ Character to recognize as decimal point (e.g. use ',' for European
+ data).
+
+ .. versionadded:: 0.18.2
+
Returns
-------
dfs : list of DataFrames
@@ -871,4 +882,5 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
'data (you passed a negative value)')
_validate_header_arg(header)
return _parse(flavor, io, match, header, index_col, skiprows,
- parse_dates, tupleize_cols, thousands, attrs, encoding)
+ parse_dates, tupleize_cols, thousands, attrs, encoding,
+ decimal)
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 21d0748fb6aba..edf1eeee7e622 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -665,6 +665,28 @@ def test_wikipedia_states_table(self):
result = self.read_html(data, 'Arizona', header=1)[0]
nose.tools.assert_equal(result['sq mi'].dtype, np.dtype('float64'))
+ def test_decimal_rows(self):
+ data = StringIO('''<html>
+ <body>
+ <table>
+ <thead>
+ <tr>
+ <th>Header</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>1100#101</td>
+ </tr>
+ </tbody>
+ </table>
+ </body>
+ </html>''')
+ expected = DataFrame(data={'Header': 1100.101}, index=[0])
+ result = self.read_html(data, decimal='#')[0]
+ nose.tools.assert_equal(result['Header'].dtype, np.dtype('float64'))
+ tm.assert_frame_equal(result, expected)
+
def test_bool_header_arg(self):
# GH 6114
for arg in [True, False]:
| - [x] closes #12907
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
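The core of the new `decimal` option is how a cell such as `1100#101` is reinterpreted when `decimal='#'`; a minimal illustration of the substitution involved (not the actual `TextParser` code):

```python
def parse_with_decimal(cell, decimal="."):
    # A cell such as '1100#101' with decimal='#' should become 1100.101,
    # matching the expectation in the new test_decimal_rows test above.
    return float(cell.replace(decimal, "."))

print(parse_with_decimal("1100#101", decimal="#"))  # -> 1100.101
```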
| https://api.github.com/repos/pandas-dev/pandas/pulls/13272 | 2016-05-24T18:21:48Z | 2016-05-27T00:15:29Z | null | 2016-05-27T00:15:42Z |
DOC: fixed typos in GroupBy document | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 4cde1fed344a8..02309fe5d6509 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -52,7 +52,7 @@ following:
step and try to return a sensibly combined result if it doesn't fit into
either of the above two categories
-Since the set of object instance method on pandas data structures are generally
+Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or ``itertools``), in which you can write code like:
@@ -129,7 +129,7 @@ columns:
In [5]: grouped = df.groupby(get_letter_type, axis=1)
-Starting with 0.8, pandas Index objects now supports duplicate values. If a
+Starting with 0.8, pandas Index objects now support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
@@ -171,7 +171,8 @@ By default the group keys are sorted during the ``groupby`` operation. You may h
df2.groupby(['X'], sort=False).sum()
-Note that ``groupby`` will preserve the order in which *observations* are sorted *within* each group. For example, the groups created by ``groupby()`` below are in the order the appeared in the original ``DataFrame``:
+Note that ``groupby`` will preserve the order in which *observations* are sorted *within* each group.
+For example, the groups created by ``groupby()`` below are in the order they appeared in the original ``DataFrame``:
.. ipython:: python
@@ -254,7 +255,7 @@ GroupBy with MultiIndex
With :ref:`hierarchically-indexed data <advanced.hierarchical>`, it's quite
natural to group by one of the levels of the hierarchy.
-Let's create a series with a two-level ``MultiIndex``.
+Let's create a Series with a two-level ``MultiIndex``.
.. ipython:: python
@@ -636,7 +637,7 @@ with NaNs.
dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
-For dataframes with multiple columns, filters should explicitly specify a column as the filter criterion.
+For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
.. ipython:: python
@@ -755,7 +756,7 @@ The dimension of the returned result can also change:
.. note::
- ``apply`` can act as a reducer, transformer, *or* filter function, depending on exactly what is passed to apply.
+ ``apply`` can act as a reducer, transformer, *or* filter function, depending on exactly what is passed to it.
So depending on the path taken, and exactly what you are grouping. Thus the grouped columns(s) may be included in
the output as well as set the indices.
@@ -789,7 +790,7 @@ Again consider the example DataFrame we've been looking at:
df
-Supposed we wished to compute the standard deviation grouped by the ``A``
+Suppose we wish to compute the standard deviation grouped by the ``A``
column. There is a slight problem, namely that we don't care about the data in
column ``B``. We refer to this as a "nuisance" column. If the passed
aggregation function can't be applied to some columns, the troublesome columns
@@ -1019,7 +1020,7 @@ Returning a Series to propagate names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Group DataFrame columns, compute a set of metrics and return a named Series.
-The Series name is used as the name for the column index. This is especially
+The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/13270 | 2016-05-24T15:31:39Z | 2016-05-24T15:34:58Z | null | 2016-05-24T15:35:02Z |
BUG: Bug in selection from a HDFStore with a fixed format and start and/or stop will now return the selected range | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 2854dbf5e655b..b627c938ecd92 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -79,6 +79,7 @@ Other enhancements
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
+
.. _whatsnew_0182.api:
API changes
@@ -207,6 +208,7 @@ Bug Fixes
- Bug in ``SparseSeries`` and ``SparseDataFrame`` creation with ``object`` dtype may raise ``TypeError`` (:issue:`11633`)
- Bug when passing a not-default-indexed ``Series`` as ``xerr`` or ``yerr`` in ``.plot()`` (:issue:`11858`)
- Bug in matplotlib ``AutoDataFormatter``; this restores the second scaled formatting and re-adds micro-second scaled formatting (:issue:`13131`)
+- Bug in selection from a ``HDFStore`` with a fixed format and ``start`` and/or ``stop`` specified will now return the selected range (:issue:`8287`)
- Bug in ``.groupby(..).resample(..)`` when the same object is called multiple times (:issue:`13174`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index d350358081aa7..fcf5125d956c6 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1314,12 +1314,20 @@ def __init__(self, store, s, func, where, nrows, start=None, stop=None,
self.s = s
self.func = func
self.where = where
- self.nrows = nrows or 0
- self.start = start or 0
- if stop is None:
- stop = self.nrows
- self.stop = min(self.nrows, stop)
+ # set start/stop if they are not set if we are a table
+ if self.s.is_table:
+ if nrows is None:
+ nrows = 0
+ if start is None:
+ start = 0
+ if stop is None:
+ stop = nrows
+ stop = min(nrows, stop)
+
+ self.nrows = nrows
+ self.start = start
+ self.stop = stop
self.coordinates = None
if iterator or chunksize is not None:
@@ -2303,14 +2311,23 @@ def f(values, freq=None, tz=None):
return klass
def validate_read(self, kwargs):
- if kwargs.get('columns') is not None:
+ """
+ remove table keywords from kwargs and return
+ raise if any keywords are passed which are not-None
+ """
+ kwargs = copy.copy(kwargs)
+
+ columns = kwargs.pop('columns', None)
+ if columns is not None:
raise TypeError("cannot pass a column specification when reading "
"a Fixed format store. this store must be "
"selected in its entirety")
- if kwargs.get('where') is not None:
+ where = kwargs.pop('where', None)
+ if where is not None:
raise TypeError("cannot pass a where specification when reading "
"from a Fixed format store. this store must be "
"selected in its entirety")
+ return kwargs
@property
def is_exists(self):
@@ -2329,11 +2346,11 @@ def get_attrs(self):
def write(self, obj, **kwargs):
self.set_attrs()
- def read_array(self, key):
+ def read_array(self, key, start=None, stop=None):
""" read an array for the specified node (off of group """
import tables
node = getattr(self.group, key)
- data = node[:]
+ data = node[start:stop]
attrs = node._v_attrs
transposed = getattr(attrs, 'transposed', False)
@@ -2363,17 +2380,17 @@ def read_array(self, key):
else:
return ret
- def read_index(self, key):
+ def read_index(self, key, **kwargs):
variety = _ensure_decoded(getattr(self.attrs, '%s_variety' % key))
if variety == u('multi'):
- return self.read_multi_index(key)
+ return self.read_multi_index(key, **kwargs)
elif variety == u('block'):
- return self.read_block_index(key)
+ return self.read_block_index(key, **kwargs)
elif variety == u('sparseint'):
- return self.read_sparse_intindex(key)
+ return self.read_sparse_intindex(key, **kwargs)
elif variety == u('regular'):
- _, index = self.read_index_node(getattr(self.group, key))
+ _, index = self.read_index_node(getattr(self.group, key), **kwargs)
return index
else: # pragma: no cover
raise TypeError('unrecognized index variety: %s' % variety)
@@ -2411,19 +2428,19 @@ def write_block_index(self, key, index):
self.write_array('%s_blengths' % key, index.blengths)
setattr(self.attrs, '%s_length' % key, index.length)
- def read_block_index(self, key):
+ def read_block_index(self, key, **kwargs):
length = getattr(self.attrs, '%s_length' % key)
- blocs = self.read_array('%s_blocs' % key)
- blengths = self.read_array('%s_blengths' % key)
+ blocs = self.read_array('%s_blocs' % key, **kwargs)
+ blengths = self.read_array('%s_blengths' % key, **kwargs)
return BlockIndex(length, blocs, blengths)
def write_sparse_intindex(self, key, index):
self.write_array('%s_indices' % key, index.indices)
setattr(self.attrs, '%s_length' % key, index.length)
- def read_sparse_intindex(self, key):
+ def read_sparse_intindex(self, key, **kwargs):
length = getattr(self.attrs, '%s_length' % key)
- indices = self.read_array('%s_indices' % key)
+ indices = self.read_array('%s_indices' % key, **kwargs)
return IntIndex(length, indices)
def write_multi_index(self, key, index):
@@ -2448,7 +2465,7 @@ def write_multi_index(self, key, index):
label_key = '%s_label%d' % (key, i)
self.write_array(label_key, lab)
- def read_multi_index(self, key):
+ def read_multi_index(self, key, **kwargs):
nlevels = getattr(self.attrs, '%s_nlevels' % key)
levels = []
@@ -2456,19 +2473,20 @@ def read_multi_index(self, key):
names = []
for i in range(nlevels):
level_key = '%s_level%d' % (key, i)
- name, lev = self.read_index_node(getattr(self.group, level_key))
+ name, lev = self.read_index_node(getattr(self.group, level_key),
+ **kwargs)
levels.append(lev)
names.append(name)
label_key = '%s_label%d' % (key, i)
- lab = self.read_array(label_key)
+ lab = self.read_array(label_key, **kwargs)
labels.append(lab)
return MultiIndex(levels=levels, labels=labels, names=names,
verify_integrity=True)
- def read_index_node(self, node):
- data = node[:]
+ def read_index_node(self, node, start=None, stop=None):
+ data = node[start:stop]
# If the index was an empty array write_array_empty() will
# have written a sentinel. Here we relace it with the original.
if ('shape' in node._v_attrs and
@@ -2607,9 +2625,9 @@ def write_array(self, key, value, items=None):
class LegacyFixed(GenericFixed):
- def read_index_legacy(self, key):
+ def read_index_legacy(self, key, start=None, stop=None):
node = getattr(self.group, key)
- data = node[:]
+ data = node[start:stop]
kind = node._v_attrs.kind
return _unconvert_index_legacy(data, kind, encoding=self.encoding)
@@ -2617,7 +2635,7 @@ def read_index_legacy(self, key):
class LegacySeriesFixed(LegacyFixed):
def read(self, **kwargs):
- self.validate_read(kwargs)
+ kwargs = self.validate_read(kwargs)
index = self.read_index_legacy('index')
values = self.read_array('values')
return Series(values, index=index)
@@ -2626,7 +2644,7 @@ def read(self, **kwargs):
class LegacyFrameFixed(LegacyFixed):
def read(self, **kwargs):
- self.validate_read(kwargs)
+ kwargs = self.validate_read(kwargs)
index = self.read_index_legacy('index')
columns = self.read_index_legacy('columns')
values = self.read_array('values')
@@ -2645,9 +2663,9 @@ def shape(self):
return None
def read(self, **kwargs):
- self.validate_read(kwargs)
- index = self.read_index('index')
- values = self.read_array('values')
+ kwargs = self.validate_read(kwargs)
+ index = self.read_index('index', **kwargs)
+ values = self.read_array('values', **kwargs)
return Series(values, index=index, name=self.name)
def write(self, obj, **kwargs):
@@ -2657,12 +2675,25 @@ def write(self, obj, **kwargs):
self.attrs.name = obj.name
-class SparseSeriesFixed(GenericFixed):
+class SparseFixed(GenericFixed):
+
+ def validate_read(self, kwargs):
+ """
+ we don't support start, stop kwds in Sparse
+ """
+ kwargs = super(SparseFixed, self).validate_read(kwargs)
+ if 'start' in kwargs or 'stop' in kwargs:
+ raise NotImplementedError("start and/or stop are not supported "
+ "in fixed Sparse reading")
+ return kwargs
+
+
+class SparseSeriesFixed(SparseFixed):
pandas_kind = u('sparse_series')
attributes = ['name', 'fill_value', 'kind']
def read(self, **kwargs):
- self.validate_read(kwargs)
+ kwargs = self.validate_read(kwargs)
index = self.read_index('index')
sp_values = self.read_array('sp_values')
sp_index = self.read_index('sp_index')
@@ -2681,12 +2712,12 @@ def write(self, obj, **kwargs):
self.attrs.kind = obj.kind
-class SparseFrameFixed(GenericFixed):
+class SparseFrameFixed(SparseFixed):
pandas_kind = u('sparse_frame')
attributes = ['default_kind', 'default_fill_value']
def read(self, **kwargs):
- self.validate_read(kwargs)
+ kwargs = self.validate_read(kwargs)
columns = self.read_index('columns')
sdict = {}
for c in columns:
@@ -2714,12 +2745,12 @@ def write(self, obj, **kwargs):
self.write_index('columns', obj.columns)
-class SparsePanelFixed(GenericFixed):
+class SparsePanelFixed(SparseFixed):
pandas_kind = u('sparse_panel')
attributes = ['default_kind', 'default_fill_value']
def read(self, **kwargs):
- self.validate_read(kwargs)
+ kwargs = self.validate_read(kwargs)
items = self.read_index('items')
sdict = {}
@@ -2782,19 +2813,26 @@ def shape(self):
except:
return None
- def read(self, **kwargs):
- self.validate_read(kwargs)
+ def read(self, start=None, stop=None, **kwargs):
+ # start, stop applied to rows, so 0th axis only
+
+ kwargs = self.validate_read(kwargs)
+ select_axis = self.obj_type()._get_block_manager_axis(0)
axes = []
for i in range(self.ndim):
- ax = self.read_index('axis%d' % i)
+
+ _start, _stop = (start, stop) if i == select_axis else (None, None)
+ ax = self.read_index('axis%d' % i, start=_start, stop=_stop)
axes.append(ax)
items = axes[0]
blocks = []
for i in range(self.nblocks):
+
blk_items = self.read_index('block%d_items' % i)
- values = self.read_array('block%d_values' % i)
+ values = self.read_array('block%d_values' % i,
+ start=_start, stop=_stop)
blk = make_block(values,
placement=items.get_indexer(blk_items))
blocks.append(blk)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 5ee84ce97979a..4c72a47dbdf6e 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -4128,10 +4128,11 @@ def test_nan_selection_bug_4858(self):
result = store.select('df', where='values>2.0')
assert_frame_equal(result, expected)
- def test_start_stop(self):
+ def test_start_stop_table(self):
with ensure_clean_store(self.path) as store:
+ # table
df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
store.append('df', df)
@@ -4143,8 +4144,55 @@ def test_start_stop(self):
# out of range
result = store.select(
'df', [Term("columns=['A']")], start=30, stop=40)
- assert(len(result) == 0)
- assert(type(result) == DataFrame)
+ self.assertTrue(len(result) == 0)
+ expected = df.ix[30:40, ['A']]
+ tm.assert_frame_equal(result, expected)
+
+ def test_start_stop_fixed(self):
+
+ with ensure_clean_store(self.path) as store:
+
+ # fixed, GH 8287
+ df = DataFrame(dict(A=np.random.rand(20),
+ B=np.random.rand(20)),
+ index=pd.date_range('20130101', periods=20))
+ store.put('df', df)
+
+ result = store.select(
+ 'df', start=0, stop=5)
+ expected = df.iloc[0:5, :]
+ tm.assert_frame_equal(result, expected)
+
+ result = store.select(
+ 'df', start=5, stop=10)
+ expected = df.iloc[5:10, :]
+ tm.assert_frame_equal(result, expected)
+
+ # out of range
+ result = store.select(
+ 'df', start=30, stop=40)
+ expected = df.iloc[30:40, :]
+ tm.assert_frame_equal(result, expected)
+
+ # series
+ s = df.A
+ store.put('s', s)
+ result = store.select('s', start=0, stop=5)
+ expected = s.iloc[0:5]
+ tm.assert_series_equal(result, expected)
+
+ result = store.select('s', start=5, stop=10)
+ expected = s.iloc[5:10]
+ tm.assert_series_equal(result, expected)
+
+ # sparse; not implemented
+ df = tm.makeDataFrame()
+ df.ix[3:5, 1:3] = np.nan
+ df.ix[8:10, -2] = np.nan
+ dfs = df.to_sparse()
+ store.put('dfs', dfs)
+ with self.assertRaises(NotImplementedError):
+ store.select('dfs', start=0, stop=5)
def test_select_filter_corner(self):
| closes #8287
| https://api.github.com/repos/pandas-dev/pandas/pulls/13267 | 2016-05-24T12:58:38Z | 2016-05-24T15:28:17Z | null | 2016-05-24T15:28:17Z |
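The fixed-format readers in the diff above thread `start`/`stop` down to the PyTables node read as `node[start:stop]`. Because both default to `None`, plain Python slice semantics make the new code degrade to the old `node[:]` full read, and out-of-range bounds clip rather than raise (matching the "out of range" test). A minimal sketch of those slicing semantics on a plain NumPy array:

```python
import numpy as np

# stand-in for the data stored at a PyTables node
data = np.arange(20)

# start=stop=None behaves exactly like the old node[:] full read
assert (data[None:None] == data).all()

# a row range, as used by store.select('df', start=5, stop=10)
assert (data[5:10] == np.arange(5, 10)).all()

# out-of-range bounds clip to an empty result instead of raising
assert data[30:40].size == 0
```

The same degradation is why `validate_read` can simply pop the kwargs and pass them through unchanged for the non-sparse fixed formats.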
TST: assert_dict_equal to check input type | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 083da2a040ed5..1d043297aa1fa 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -220,8 +220,14 @@ def test_constructor_dict(self):
frame = DataFrame({'col1': self.ts1,
'col2': self.ts2})
- tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False)
- tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False)
+ # col2 is padded with NaN
+ self.assertEqual(len(self.ts1), 30)
+ self.assertEqual(len(self.ts2), 25)
+
+ tm.assert_series_equal(self.ts1, frame['col1'], check_names=False)
+ exp = pd.Series(np.concatenate([[np.nan] * 5, self.ts2.values]),
+ index=self.ts1.index, name='col2')
+ tm.assert_series_equal(exp, frame['col2'])
frame = DataFrame({'col1': self.ts1,
'col2': self.ts2},
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 1e3940dc8f038..ca1ebe477e903 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -393,13 +393,17 @@ def test_setitem(self):
series = self.frame['A'][::2]
self.frame['col5'] = series
self.assertIn('col5', self.frame)
- tm.assert_dict_equal(series, self.frame['col5'],
- compare_keys=False)
+
+ self.assertEqual(len(series), 15)
+ self.assertEqual(len(self.frame), 30)
+
+ exp = np.ravel(np.column_stack((series.values, [np.nan] * 15)))
+ exp = Series(exp, index=self.frame.index, name='col5')
+ tm.assert_series_equal(self.frame['col5'], exp)
series = self.frame['A']
self.frame['col6'] = series
- tm.assert_dict_equal(series, self.frame['col6'],
- compare_keys=False)
+ tm.assert_series_equal(series, self.frame['col6'], check_names=False)
with tm.assertRaises(KeyError):
self.frame[randn(len(self.frame) + 1)] = 1
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index cd2a0fbeefae3..7dfada0d868fe 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -724,9 +724,14 @@ def test_combineFrame(self):
frame_copy['C'][:5] = nan
added = self.frame + frame_copy
- tm.assert_dict_equal(added['A'].valid(),
- self.frame['A'] * 2,
- compare_keys=False)
+
+ indexer = added['A'].valid().index
+ exp = (self.frame['A'] * 2).copy()
+
+ tm.assert_series_equal(added['A'].valid(), exp.loc[indexer])
+
+ exp.loc[~exp.index.isin(indexer)] = np.nan
+ tm.assert_series_equal(added['A'], exp.loc[added['A'].index])
self.assertTrue(
np.isnan(added['C'].reindex(frame_copy.index)[:5]).all())
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 820076e2c6fd5..b9baae6cbeda7 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -120,13 +120,13 @@ def test_pct_change_shift_over_nas(self):
def test_shift(self):
# naive shift
shiftedFrame = self.tsframe.shift(5)
- self.assertTrue(shiftedFrame.index.equals(self.tsframe.index))
+ self.assert_index_equal(shiftedFrame.index, self.tsframe.index)
shiftedSeries = self.tsframe['A'].shift(5)
assert_series_equal(shiftedFrame['A'], shiftedSeries)
shiftedFrame = self.tsframe.shift(-5)
- self.assertTrue(shiftedFrame.index.equals(self.tsframe.index))
+ self.assert_index_equal(shiftedFrame.index, self.tsframe.index)
shiftedSeries = self.tsframe['A'].shift(-5)
assert_series_equal(shiftedFrame['A'], shiftedSeries)
@@ -154,10 +154,10 @@ def test_shift(self):
ps = tm.makePeriodFrame()
shifted = ps.shift(1)
unshifted = shifted.shift(-1)
- self.assertTrue(shifted.index.equals(ps.index))
-
- tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0],
- compare_keys=False)
+ self.assert_index_equal(shifted.index, ps.index)
+ self.assert_index_equal(unshifted.index, ps.index)
+ tm.assert_numpy_array_equal(unshifted.ix[:, 0].valid().values,
+ ps.ix[:-1, 0].values)
shifted2 = ps.shift(1, 'B')
shifted3 = ps.shift(1, datetools.bday)
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 72f1cac219998..48224c7bfbd63 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -65,8 +65,9 @@ def test_combine_first(self):
combined = strings.combine_first(floats)
- tm.assert_dict_equal(strings, combined, compare_keys=False)
- tm.assert_dict_equal(floats[1::2], combined, compare_keys=False)
+ tm.assert_series_equal(strings, combined.loc[index[::2]])
+ tm.assert_series_equal(floats[1::2].astype(object),
+ combined.loc[index[1::2]])
# corner case
s = Series([1., 2, 3], index=[0, 1, 2])
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index dec4f878d7d56..e27a21e6d5903 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -433,8 +433,8 @@ def test_valid(self):
result = ts.valid()
self.assertEqual(len(result), ts.count())
-
- tm.assert_dict_equal(result, ts, compare_keys=False)
+ tm.assert_series_equal(result, ts[1::2])
+ tm.assert_series_equal(result, ts[pd.notnull(ts)])
def test_isnull(self):
ser = Series([0, 5.4, 3, nan, -0.001])
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index de62fb4ab6f07..463063016f1e9 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -25,7 +25,10 @@ def test_shift(self):
shifted = self.ts.shift(1)
unshifted = shifted.shift(-1)
- tm.assert_dict_equal(unshifted.valid(), self.ts, compare_keys=False)
+ tm.assert_index_equal(shifted.index, self.ts.index)
+ tm.assert_index_equal(unshifted.index, self.ts.index)
+ tm.assert_numpy_array_equal(unshifted.valid().values,
+ self.ts.values[:-1])
offset = datetools.bday
shifted = self.ts.shift(1, freq=offset)
@@ -49,7 +52,9 @@ def test_shift(self):
ps = tm.makePeriodSeries()
shifted = ps.shift(1)
unshifted = shifted.shift(-1)
- tm.assert_dict_equal(unshifted.valid(), ps, compare_keys=False)
+ tm.assert_index_equal(shifted.index, ps.index)
+ tm.assert_index_equal(unshifted.index, ps.index)
+ tm.assert_numpy_array_equal(unshifted.valid().values, ps.values[:-1])
shifted2 = ps.shift(1, 'B')
shifted3 = ps.shift(1, datetools.bday)
@@ -77,16 +82,16 @@ def test_shift(self):
# xref 8260
# with tz
- s = Series(
- date_range('2000-01-01 09:00:00', periods=5,
- tz='US/Eastern'), name='foo')
+ s = Series(date_range('2000-01-01 09:00:00', periods=5,
+ tz='US/Eastern'), name='foo')
result = s - s.shift()
- assert_series_equal(result, Series(
- TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo'))
+
+ exp = Series(TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')
+ assert_series_equal(result, exp)
# incompat tz
- s2 = Series(
- date_range('2000-01-01 09:00:00', periods=5, tz='CET'), name='foo')
+ s2 = Series(date_range('2000-01-01 09:00:00', periods=5,
+ tz='CET'), name='foo')
self.assertRaises(ValueError, lambda: s - s2)
def test_tshift(self):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 39b4cca85ad9c..dd66d732ba684 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -131,7 +131,13 @@ def assert_almost_equal(left, right, check_exact=False, **kwargs):
return _testing.assert_almost_equal(left, right, **kwargs)
-assert_dict_equal = _testing.assert_dict_equal
+def assert_dict_equal(left, right, compare_keys=True):
+
+ # instance validation
+ assertIsInstance(left, dict, '[dict] ')
+ assertIsInstance(right, dict, '[dict] ')
+
+ return _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
def randbool(size=(), p=0.5):
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
`assert_dict_equal` now checks that its inputs are `dict` instances. Fixed some tests that passed `Series` to it.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13264 | 2016-05-23T22:16:04Z | 2016-05-24T01:11:06Z | null | 2016-05-24T04:29:16Z |
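A simplified stand-in (not the real `pandas.util.testing` implementation) for the new instance validation: reject dict-likes such as `Series` up front, steering callers to `assert_series_equal` instead.

```python
import pandas as pd

def assert_dict_equal(left, right, compare_keys=True):
    # instance validation, as added in the wrapper above
    assert isinstance(left, dict), '[dict] left is not a dict'
    assert isinstance(right, dict), '[dict] right is not a dict'
    if compare_keys:
        assert set(left) == set(right)
    for k in left.keys() & right.keys():
        assert left[k] == right[k]

# plain dicts still compare fine
assert_dict_equal({'a': 1, 'b': 2}, {'a': 1, 'b': 2})

# a Series is dict-like but is now rejected
try:
    assert_dict_equal(pd.Series([1, 2]), {'a': 1})
    raised = False
except AssertionError:
    raised = True
assert raised
```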
TST/CLN: remove np.assert_equal | diff --git a/ci/lint.sh b/ci/lint.sh
index 6b8f160fc90db..4b9cd624b2b36 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -15,7 +15,12 @@ if [ "$LINT" ]; then
if [ $? -ne "0" ]; then
RET=1
fi
+
done
+ grep -r --include '*.py' --exclude nosetester.py --exclude testing.py 'numpy.testing' pandas
+ if [ $? = "0" ]; then
+ RET=1
+ fi
else
echo "NOT Linting"
fi
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index 143e6017b462a..023519fd7fc20 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -12,8 +12,6 @@
from numpy.random import randn, rand, randint
import numpy as np
-from numpy.testing import assert_allclose
-from numpy.testing.decorators import slow
import pandas as pd
from pandas.core import common as com
@@ -33,7 +31,8 @@
import pandas.lib as lib
from pandas.util.testing import (assert_frame_equal, randbool,
assertRaisesRegexp, assert_numpy_array_equal,
- assert_produces_warning, assert_series_equal)
+ assert_produces_warning, assert_series_equal,
+ slow)
from pandas.compat import PY3, u, reduce
_series_frame_incompatible = _bool_ops_syms
@@ -280,9 +279,13 @@ def check_modulus(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs % rhs
- assert_allclose(result, expected)
+
+ tm.assert_almost_equal(result, expected)
expected = self.ne.evaluate('expected {0} rhs'.format(arith1))
- assert_allclose(result, expected)
+ if isinstance(result, (DataFrame, Series)):
+ tm.assert_almost_equal(result.values, expected)
+ else:
+ tm.assert_almost_equal(result, expected.item())
def check_floor_division(self, lhs, arith1, rhs):
ex = 'lhs {0} rhs'.format(arith1)
@@ -319,13 +322,13 @@ def check_pow(self, lhs, arith1, rhs):
self.assertRaises(AssertionError, tm.assert_numpy_array_equal,
result, expected)
else:
- assert_allclose(result, expected)
+ tm.assert_almost_equal(result, expected)
ex = '(lhs {0} rhs) {0} rhs'.format(arith1)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = self.get_expected_pow_result(
self.get_expected_pow_result(lhs, rhs), rhs)
- assert_allclose(result, expected)
+ tm.assert_almost_equal(result, expected)
def check_single_invert_op(self, lhs, cmp1, rhs):
# simple
@@ -701,10 +704,10 @@ def check_modulus(self, lhs, arith1, rhs):
result = pd.eval(ex, engine=self.engine, parser=self.parser)
expected = lhs % rhs
- assert_allclose(result, expected)
+ tm.assert_almost_equal(result, expected)
expected = _eval_single_bin(expected, arith1, rhs, self.engine)
- assert_allclose(result, expected)
+ tm.assert_almost_equal(result, expected)
def check_alignment(self, result, nlhs, ghs, op):
try:
@@ -1578,7 +1581,7 @@ def test_binary_functions(self):
expr = "{0}(a, b)".format(fn)
got = self.eval(expr)
expect = getattr(np, fn)(a, b)
- np.testing.assert_allclose(got, expect)
+ tm.assert_almost_equal(got, expect, check_names=False)
def test_df_use_case(self):
df = DataFrame({'a': np.random.randn(10),
diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index 6fe559e5cacd8..cad469de86fe9 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -87,7 +87,7 @@ def test_frame_double_encoded_labels(self):
orient='index'))
df_unser = read_json(df.to_json(orient='records'), orient='records')
assert_index_equal(df.columns, df_unser.columns)
- np.testing.assert_equal(df.values, df_unser.values)
+ tm.assert_numpy_array_equal(df.values, df_unser.values)
def test_frame_non_unique_index(self):
df = DataFrame([['a', 'b'], ['c', 'd']], index=[1, 1],
@@ -100,9 +100,9 @@ def test_frame_non_unique_index(self):
orient='split'))
unser = read_json(df.to_json(orient='records'), orient='records')
self.assertTrue(df.columns.equals(unser.columns))
- np.testing.assert_equal(df.values, unser.values)
+ tm.assert_numpy_array_equal(df.values, unser.values)
unser = read_json(df.to_json(orient='values'), orient='values')
- np.testing.assert_equal(df.values, unser.values)
+ tm.assert_numpy_array_equal(df.values, unser.values)
def test_frame_non_unique_columns(self):
df = DataFrame([['a', 'b'], ['c', 'd']], index=[1, 2],
@@ -115,7 +115,7 @@ def test_frame_non_unique_columns(self):
assert_frame_equal(df, read_json(df.to_json(orient='split'),
orient='split', dtype=False))
unser = read_json(df.to_json(orient='values'), orient='values')
- np.testing.assert_equal(df.values, unser.values)
+ tm.assert_numpy_array_equal(df.values, unser.values)
# GH4377; duplicate columns not processing correctly
df = DataFrame([['a', 'b'], ['c', 'd']], index=[
@@ -487,7 +487,7 @@ def test_series_non_unique_index(self):
orient='split', typ='series'))
unser = read_json(s.to_json(orient='records'),
orient='records', typ='series')
- np.testing.assert_equal(s.values, unser.values)
+ tm.assert_numpy_array_equal(s.values, unser.values)
def test_series_from_json_to_json(self):
diff --git a/pandas/io/tests/json/test_ujson.py b/pandas/io/tests/json/test_ujson.py
index babcd910a2edd..8e4b492c984f1 100644
--- a/pandas/io/tests/json/test_ujson.py
+++ b/pandas/io/tests/json/test_ujson.py
@@ -21,8 +21,6 @@
import pandas.compat as compat
import numpy as np
-from numpy.testing import (assert_array_almost_equal_nulp,
- assert_approx_equal)
from pandas import DataFrame, Series, Index, NaT, DatetimeIndex
import pandas.util.testing as tm
@@ -1015,19 +1013,19 @@ def testFloatArray(self):
inpt = arr.astype(dtype)
outp = np.array(ujson.decode(ujson.encode(
inpt, double_precision=15)), dtype=dtype)
- assert_array_almost_equal_nulp(inpt, outp)
+ tm.assert_almost_equal(inpt, outp)
def testFloatMax(self):
num = np.float(np.finfo(np.float).max / 10)
- assert_approx_equal(np.float(ujson.decode(
+ tm.assert_almost_equal(np.float(ujson.decode(
ujson.encode(num, double_precision=15))), num, 15)
num = np.float32(np.finfo(np.float32).max / 10)
- assert_approx_equal(np.float32(ujson.decode(
+ tm.assert_almost_equal(np.float32(ujson.decode(
ujson.encode(num, double_precision=15))), num, 15)
num = np.float64(np.finfo(np.float64).max / 10)
- assert_approx_equal(np.float64(ujson.decode(
+ tm.assert_almost_equal(np.float64(ujson.decode(
ujson.encode(num, double_precision=15))), num, 15)
def testArrays(self):
@@ -1067,9 +1065,9 @@ def testArrays(self):
arr = np.arange(100.202, 200.202, 1, dtype=np.float32)
arr = arr.reshape((5, 5, 4))
outp = np.array(ujson.decode(ujson.encode(arr)), dtype=np.float32)
- assert_array_almost_equal_nulp(arr, outp)
+ tm.assert_almost_equal(arr, outp)
outp = ujson.decode(ujson.encode(arr), numpy=True, dtype=np.float32)
- assert_array_almost_equal_nulp(arr, outp)
+ tm.assert_almost_equal(arr, outp)
def testOdArray(self):
def will_raise():
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 3912bbbf11e53..2be0c4edb8f5d 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -10,7 +10,6 @@
import nose
import numpy as np
-from numpy.testing.decorators import slow
from pandas.lib import Timestamp
import pandas as pd
@@ -607,7 +606,7 @@ def test_url(self):
tm.assert_frame_equal(url_table, local_table)
# TODO: ftp testing
- @slow
+ @tm.slow
def test_file(self):
# FILE
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index af053450d78c4..b7e5360a6f3db 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -13,7 +13,6 @@
from numpy import nan
import numpy as np
-from numpy.testing.decorators import slow
import pandas as pd
from pandas import DataFrame, Index, MultiIndex
@@ -544,7 +543,7 @@ def test_read_from_s3_url(self):
local_table = self.get_exceldf('test1')
tm.assert_frame_equal(url_table, local_table)
- @slow
+ @tm.slow
def test_read_from_file_url(self):
# FILE
@@ -1102,9 +1101,9 @@ def test_sheets(self):
tm.assert_frame_equal(self.frame, recons)
recons = read_excel(reader, 'test2', index_col=0)
tm.assert_frame_equal(self.tsframe, recons)
- np.testing.assert_equal(2, len(reader.sheet_names))
- np.testing.assert_equal('test1', reader.sheet_names[0])
- np.testing.assert_equal('test2', reader.sheet_names[1])
+ self.assertEqual(2, len(reader.sheet_names))
+ self.assertEqual('test1', reader.sheet_names[0])
+ self.assertEqual('test2', reader.sheet_names[1])
def test_colaliases(self):
_skip_if_no_xlrd()
diff --git a/pandas/io/tests/test_ga.py b/pandas/io/tests/test_ga.py
index b8b698691a9f5..469e121f633d7 100644
--- a/pandas/io/tests/test_ga.py
+++ b/pandas/io/tests/test_ga.py
@@ -7,8 +7,8 @@
import nose
import pandas as pd
from pandas import compat
-from pandas.util.testing import network, assert_frame_equal, with_connectivity_check
-from numpy.testing.decorators import slow
+from pandas.util.testing import (network, assert_frame_equal,
+ with_connectivity_check, slow)
import pandas.util.testing as tm
if compat.PY3:
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 21d0748fb6aba..9b68267a0a0a8 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -16,7 +16,6 @@
import numpy as np
from numpy.random import rand
-from numpy.testing.decorators import slow
from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
date_range, Series)
@@ -129,7 +128,7 @@ def test_spam_url(self):
assert_framelist_equal(df1, df2)
- @slow
+ @tm.slow
def test_banklist(self):
df1 = self.read_html(self.banklist_data, '.*Florida.*',
attrs={'id': 'table'})
@@ -289,9 +288,9 @@ def test_invalid_url(self):
self.read_html('http://www.a23950sdfa908sd.com',
match='.*Water.*')
except ValueError as e:
- tm.assert_equal(str(e), 'No tables found')
+ self.assertEqual(str(e), 'No tables found')
- @slow
+ @tm.slow
def test_file_url(self):
url = self.banklist_data
dfs = self.read_html(file_path_to_url(url), 'First',
@@ -300,7 +299,7 @@ def test_file_url(self):
for df in dfs:
tm.assertIsInstance(df, DataFrame)
- @slow
+ @tm.slow
def test_invalid_table_attrs(self):
url = self.banklist_data
with tm.assertRaisesRegexp(ValueError, 'No tables found'):
@@ -311,39 +310,39 @@ def _bank_data(self, *args, **kwargs):
return self.read_html(self.banklist_data, 'Metcalf',
attrs={'id': 'table'}, *args, **kwargs)
- @slow
+ @tm.slow
def test_multiindex_header(self):
df = self._bank_data(header=[0, 1])[0]
tm.assertIsInstance(df.columns, MultiIndex)
- @slow
+ @tm.slow
def test_multiindex_index(self):
df = self._bank_data(index_col=[0, 1])[0]
tm.assertIsInstance(df.index, MultiIndex)
- @slow
+ @tm.slow
def test_multiindex_header_index(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1])[0]
tm.assertIsInstance(df.columns, MultiIndex)
tm.assertIsInstance(df.index, MultiIndex)
- @slow
+ @tm.slow
def test_multiindex_header_skiprows_tuples(self):
df = self._bank_data(header=[0, 1], skiprows=1, tupleize_cols=True)[0]
tm.assertIsInstance(df.columns, Index)
- @slow
+ @tm.slow
def test_multiindex_header_skiprows(self):
df = self._bank_data(header=[0, 1], skiprows=1)[0]
tm.assertIsInstance(df.columns, MultiIndex)
- @slow
+ @tm.slow
def test_multiindex_header_index_skiprows(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1], skiprows=1)[0]
tm.assertIsInstance(df.index, MultiIndex)
tm.assertIsInstance(df.columns, MultiIndex)
- @slow
+ @tm.slow
def test_regex_idempotency(self):
url = self.banklist_data
dfs = self.read_html(file_path_to_url(url),
@@ -371,7 +370,7 @@ def test_python_docs_table(self):
zz = [df.iloc[0, 0][0:4] for df in dfs]
self.assertEqual(sorted(zz), sorted(['Repo', 'What']))
- @slow
+ @tm.slow
def test_thousands_macau_stats(self):
all_non_nan_table_index = -2
macau_data = os.path.join(DATA_PATH, 'macau.html')
@@ -381,7 +380,7 @@ def test_thousands_macau_stats(self):
self.assertFalse(any(s.isnull().any() for _, s in df.iteritems()))
- @slow
+ @tm.slow
def test_thousands_macau_index_col(self):
all_non_nan_table_index = -2
macau_data = os.path.join(DATA_PATH, 'macau.html')
@@ -522,7 +521,7 @@ def test_nyse_wsj_commas_table(self):
self.assertEqual(df.shape[0], nrows)
self.assertTrue(df.columns.equals(columns))
- @slow
+ @tm.slow
def test_banklist_header(self):
from pandas.io.html import _remove_whitespace
@@ -561,7 +560,7 @@ def try_remove_ws(x):
coerce=True)
tm.assert_frame_equal(converted, gtnew)
- @slow
+ @tm.slow
def test_gold_canyon(self):
gc = 'Gold Canyon'
with open(self.banklist_data, 'r') as f:
@@ -663,7 +662,7 @@ def test_wikipedia_states_table(self):
assert os.path.isfile(data), '%r is not a file' % data
assert os.path.getsize(data), '%r is an empty file' % data
result = self.read_html(data, 'Arizona', header=1)[0]
- nose.tools.assert_equal(result['sq mi'].dtype, np.dtype('float64'))
+ self.assertEqual(result['sq mi'].dtype, np.dtype('float64'))
def test_bool_header_arg(self):
# GH 6114
@@ -753,7 +752,7 @@ def test_works_on_valid_markup(self):
tm.assertIsInstance(dfs, list)
tm.assertIsInstance(dfs[0], DataFrame)
- @slow
+ @tm.slow
def test_fallback_success(self):
_skip_if_none_of(('bs4', 'html5lib'))
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
@@ -796,7 +795,7 @@ def get_elements_from_file(url, element='table'):
return soup.find_all(element)
-@slow
[email protected]
def test_bs4_finds_tables():
filepath = os.path.join(DATA_PATH, "spam.html")
with warnings.catch_warnings():
@@ -811,13 +810,13 @@ def get_lxml_elements(url, element):
return doc.xpath('.//{0}'.format(element))
-@slow
[email protected]
def test_lxml_finds_tables():
filepath = os.path.join(DATA_PATH, "spam.html")
assert get_lxml_elements(filepath, 'table')
-@slow
[email protected]
def test_lxml_finds_tbody():
filepath = os.path.join(DATA_PATH, "spam.html")
assert get_lxml_elements(filepath, 'tbody')
diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py
index 17f74d5789298..830c68d62efad 100644
--- a/pandas/io/tests/test_stata.py
+++ b/pandas/io/tests/test_stata.py
@@ -179,7 +179,7 @@ def test_read_dta2(self):
w = [x for x in w if x.category is UserWarning]
# should get warning for each call to read_dta
- tm.assert_equal(len(w), 3)
+ self.assertEqual(len(w), 3)
# buggy test because of the NaT comparison on certain platforms
# Format 113 test fails since it does not support tc and tC formats
@@ -375,7 +375,7 @@ def test_read_write_dta11(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path, None)
# should get a warning for that format.
- tm.assert_equal(len(w), 1)
+ self.assertEqual(len(w), 1)
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
@@ -403,7 +403,7 @@ def test_read_write_dta12(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path, None)
# should get a warning for that format.
- tm.assert_equal(len(w), 1)
+ self.assertEqual(len(w), 1)
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
@@ -904,7 +904,7 @@ def test_categorical_warnings_and_errors(self):
with warnings.catch_warnings(record=True) as w:
original.to_stata(path)
# should get a warning for mixed content
- tm.assert_equal(len(w), 1)
+ self.assertEqual(len(w), 1)
def test_categorical_with_stata_missing_values(self):
values = [['a' + str(i)] for i in range(120)]
@@ -986,10 +986,10 @@ def test_categorical_ordering(self):
for col in parsed_115:
if not is_categorical_dtype(parsed_115[col]):
continue
- tm.assert_equal(True, parsed_115[col].cat.ordered)
- tm.assert_equal(True, parsed_117[col].cat.ordered)
- tm.assert_equal(False, parsed_115_unordered[col].cat.ordered)
- tm.assert_equal(False, parsed_117_unordered[col].cat.ordered)
+ self.assertEqual(True, parsed_115[col].cat.ordered)
+ self.assertEqual(True, parsed_117[col].cat.ordered)
+ self.assertEqual(False, parsed_115_unordered[col].cat.ordered)
+ self.assertEqual(False, parsed_117_unordered[col].cat.ordered)
def test_read_chunks_117(self):
files_117 = [self.dta1_117, self.dta2_117, self.dta3_117,
diff --git a/pandas/io/tests/test_wb.py b/pandas/io/tests/test_wb.py
index 58386c3f1c145..42884b19de03a 100644
--- a/pandas/io/tests/test_wb.py
+++ b/pandas/io/tests/test_wb.py
@@ -6,7 +6,6 @@
from pandas.compat import u
from pandas.util.testing import network
from pandas.util.testing import assert_frame_equal
-from numpy.testing.decorators import slow
import pandas.util.testing as tm
# deprecated
@@ -15,7 +14,7 @@
class TestWB(tm.TestCase):
- @slow
+ @tm.slow
@network
def test_wdi_search(self):
@@ -26,7 +25,7 @@ def test_wdi_search(self):
result = search('gdp.*capita.*constant')
self.assertTrue(result.name.str.contains('GDP').any())
- @slow
+ @tm.slow
@network
def test_wdi_download(self):
@@ -55,7 +54,7 @@ def test_wdi_download(self):
expected.index = result.index
assert_frame_equal(result, pandas.DataFrame(expected))
- @slow
+ @tm.slow
@network
def test_wdi_download_w_retired_indicator(self):
@@ -85,7 +84,7 @@ def test_wdi_download_w_retired_indicator(self):
if len(result) > 0:
raise nose.SkipTest("Invalid results")
- @slow
+ @tm.slow
@network
def test_wdi_download_w_crash_inducing_countrycode(self):
@@ -103,7 +102,7 @@ def test_wdi_download_w_crash_inducing_countrycode(self):
if len(result) > 0:
raise nose.SkipTest("Invalid results")
- @slow
+ @tm.slow
@network
def test_wdi_get_countries(self):
result = get_countries()
diff --git a/pandas/sparse/tests/test_libsparse.py b/pandas/sparse/tests/test_libsparse.py
index 352355fd55c23..6edae66d4e55b 100644
--- a/pandas/sparse/tests/test_libsparse.py
+++ b/pandas/sparse/tests/test_libsparse.py
@@ -3,7 +3,6 @@
import nose # noqa
import numpy as np
import operator
-from numpy.testing import assert_equal
import pandas.util.testing as tm
from pandas import compat
@@ -51,14 +50,15 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
yindex = BlockIndex(TEST_LENGTH, yloc, ylen)
bresult = xindex.make_union(yindex)
assert (isinstance(bresult, BlockIndex))
- assert_equal(bresult.blocs, eloc)
- assert_equal(bresult.blengths, elen)
+ tm.assert_numpy_array_equal(bresult.blocs, eloc)
+ tm.assert_numpy_array_equal(bresult.blengths, elen)
ixindex = xindex.to_int_index()
iyindex = yindex.to_int_index()
iresult = ixindex.make_union(iyindex)
assert (isinstance(iresult, IntIndex))
- assert_equal(iresult.indices, bresult.to_int_index().indices)
+ tm.assert_numpy_array_equal(iresult.indices,
+ bresult.to_int_index().indices)
"""
x: ----
@@ -411,7 +411,7 @@ def test_to_int_index(self):
block = BlockIndex(20, locs, lengths)
dense = block.to_int_index()
- assert_equal(dense.indices, exp_inds)
+ tm.assert_numpy_array_equal(dense.indices, exp_inds)
def test_to_block_index(self):
index = BlockIndex(10, [0, 5], [4, 5])
@@ -489,7 +489,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
ydindex, yfill)
self.assertTrue(rb_index.to_int_index().equals(ri_index))
- assert_equal(result_block_vals, result_int_vals)
+ tm.assert_numpy_array_equal(result_block_vals, result_int_vals)
# check versus Series...
xseries = Series(x, xdindex.indices)
@@ -501,8 +501,9 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
series_result = python_op(xseries, yseries)
series_result = series_result.reindex(ri_index.indices)
- assert_equal(result_block_vals, series_result.values)
- assert_equal(result_int_vals, series_result.values)
+ tm.assert_numpy_array_equal(result_block_vals,
+ series_result.values)
+ tm.assert_numpy_array_equal(result_int_vals, series_result.values)
check_cases(_check_case)
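The hunks above swap `numpy.testing.assert_equal` for the stricter `tm.assert_numpy_array_equal`. A minimal sketch of the difference (using the modern `pandas._testing` module path; at the time of this PR the same helpers lived in `pandas.util.testing`):

```python
import numpy as np
import numpy.testing as npt
import pandas._testing as tm  # pandas.util.testing at the time of this PR

a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([1, 2, 3], dtype=np.int64)

# numpy's assert_equal compares values only, so differing dtypes pass.
npt.assert_equal(a, b)

# pandas' assert_numpy_array_equal also checks dtype, so this raises.
try:
    tm.assert_numpy_array_equal(a, b)
except AssertionError:
    print("dtype mismatch caught")
```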
diff --git a/pandas/sparse/tests/test_series.py b/pandas/sparse/tests/test_series.py
index 5cbc509b836db..58e3dfbdf66e4 100644
--- a/pandas/sparse/tests/test_series.py
+++ b/pandas/sparse/tests/test_series.py
@@ -5,7 +5,6 @@
from numpy import nan
import numpy as np
import pandas as pd
-from numpy.testing import assert_equal
from pandas import Series, DataFrame, bdate_range
from pandas.core.datetools import BDay
@@ -148,20 +147,23 @@ def test_series_density(self):
def test_sparse_to_dense(self):
arr, index = _test_data1()
series = self.bseries.to_dense()
- assert_equal(series, arr)
+ tm.assert_series_equal(series, Series(arr, name='bseries'))
series = self.bseries.to_dense(sparse_only=True)
- assert_equal(series, arr[np.isfinite(arr)])
+
+ indexer = np.isfinite(arr)
+ exp = Series(arr[indexer], index=index[indexer], name='bseries')
+ tm.assert_series_equal(series, exp)
series = self.iseries.to_dense()
- assert_equal(series, arr)
+ tm.assert_series_equal(series, Series(arr, name='iseries'))
arr, index = _test_data1_zero()
series = self.zbseries.to_dense()
- assert_equal(series, arr)
+ tm.assert_series_equal(series, Series(arr, name='zbseries'))
series = self.ziseries.to_dense()
- assert_equal(series, arr)
+ tm.assert_series_equal(series, Series(arr))
def test_to_dense_fill_value(self):
s = pd.Series([1, np.nan, np.nan, 3, np.nan])
@@ -225,8 +227,8 @@ def test_constructor(self):
tm.assertIsInstance(self.iseries.sp_index, IntIndex)
self.assertEqual(self.zbseries.fill_value, 0)
- assert_equal(self.zbseries.values.values,
- self.bseries.to_dense().fillna(0).values)
+ tm.assert_numpy_array_equal(self.zbseries.values.values,
+ self.bseries.to_dense().fillna(0).values)
# pass SparseSeries
def _check_const(sparse, name):
@@ -252,7 +254,7 @@ def _check_const(sparse, name):
# pass Series
bseries2 = SparseSeries(self.bseries.to_dense())
- assert_equal(self.bseries.sp_values, bseries2.sp_values)
+ tm.assert_numpy_array_equal(self.bseries.sp_values, bseries2.sp_values)
# pass dict?
@@ -292,7 +294,7 @@ def test_constructor_ndarray(self):
def test_constructor_nonnan(self):
arr = [0, 0, 0, nan, nan]
sp_series = SparseSeries(arr, fill_value=0)
- assert_equal(sp_series.values.values, arr)
+ tm.assert_numpy_array_equal(sp_series.values.values, arr)
self.assertEqual(len(sp_series), 5)
self.assertEqual(sp_series.shape, (5, ))
@@ -1049,8 +1051,8 @@ def _check_results_to_coo(results, check):
# or compare directly as difference of sparse
# assert(abs(A - A_result).max() < 1e-12) # max is failing in python
# 2.6
- assert_equal(il, il_result)
- assert_equal(jl, jl_result)
+ tm.assert_numpy_array_equal(il, il_result)
+ tm.assert_numpy_array_equal(jl, jl_result)
def test_concat(self):
val1 = np.array([1, 2, np.nan, np.nan, 0, np.nan])
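The rewritten `test_sparse_to_dense` assertions above construct each expected Series with an explicit `name=` argument because `tm.assert_series_equal`, unlike a raw value comparison, also validates metadata. A minimal sketch (modern `pandas._testing` path; the PR-era import was `pandas.util.testing`):

```python
import pandas as pd
import pandas._testing as tm  # pandas.util.testing at the time of this PR

result = pd.Series([1.0, 2.0], name="bseries")

# Matching values AND name: passes.
tm.assert_series_equal(result, pd.Series([1.0, 2.0], name="bseries"))

# Matching values but no name on the expected Series: raises, which is
# why the diff threads name='bseries' etc. into each expected Series.
try:
    tm.assert_series_equal(result, pd.Series([1.0, 2.0]))
except AssertionError:
    print("name mismatch caught")
```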
diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py
index 725a4e8296dd2..4932ac8ffdf99 100644
--- a/pandas/stats/tests/test_ols.py
+++ b/pandas/stats/tests/test_ols.py
@@ -13,7 +13,6 @@
from distutils.version import LooseVersion
import nose
import numpy as np
-from numpy.testing.decorators import slow
from pandas import date_range, bdate_range
from pandas.core.panel import Panel
@@ -22,7 +21,7 @@
from pandas.stats.ols import _filter_data
from pandas.stats.plm import NonPooledPanelOLS, PanelOLS
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
- assert_frame_equal, assertRaisesRegexp)
+ assert_frame_equal, assertRaisesRegexp, slow)
import pandas.util.testing as tm
import pandas.compat as compat
from .common import BaseTest
diff --git a/pandas/stats/tests/test_var.py b/pandas/stats/tests/test_var.py
index 9bcd070dc1d33..9f2c95a2d3d5c 100644
--- a/pandas/stats/tests/test_var.py
+++ b/pandas/stats/tests/test_var.py
@@ -1,9 +1,8 @@
# flake8: noqa
from __future__ import print_function
-from numpy.testing import run_module_suite, assert_equal, TestCase
-from pandas.util.testing import assert_almost_equal
+import pandas.util.testing as tm
from pandas.compat import range
import nose
@@ -33,53 +32,56 @@
class CheckVAR(object):
def test_params(self):
- assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3)
+ tm.assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3)
def test_neqs(self):
- assert_equal(self.res1.neqs, self.res2.neqs)
+ tm.assert_numpy_array_equal(self.res1.neqs, self.res2.neqs)
def test_nobs(self):
- assert_equal(self.res1.avobs, self.res2.nobs)
+ tm.assert_numpy_array_equal(self.res1.avobs, self.res2.nobs)
def test_df_eq(self):
- assert_equal(self.res1.df_eq, self.res2.df_eq)
+ tm.assert_numpy_array_equal(self.res1.df_eq, self.res2.df_eq)
def test_rmse(self):
results = self.res1.results
for i in range(len(results)):
- assert_almost_equal(results[i].mse_resid ** .5,
- eval('self.res2.rmse_' + str(i + 1)), DECIMAL_6)
+ tm.assert_almost_equal(results[i].mse_resid ** .5,
+ eval('self.res2.rmse_' + str(i + 1)),
+ DECIMAL_6)
def test_rsquared(self):
results = self.res1.results
for i in range(len(results)):
- assert_almost_equal(results[i].rsquared,
- eval('self.res2.rsquared_' + str(i + 1)), DECIMAL_3)
+ tm.assert_almost_equal(results[i].rsquared,
+ eval('self.res2.rsquared_' + str(i + 1)),
+ DECIMAL_3)
def test_llf(self):
results = self.res1.results
- assert_almost_equal(self.res1.llf, self.res2.llf, DECIMAL_2)
+ tm.assert_almost_equal(self.res1.llf, self.res2.llf, DECIMAL_2)
for i in range(len(results)):
- assert_almost_equal(results[i].llf,
- eval('self.res2.llf_' + str(i + 1)), DECIMAL_2)
+ tm.assert_almost_equal(results[i].llf,
+ eval('self.res2.llf_' + str(i + 1)),
+ DECIMAL_2)
def test_aic(self):
- assert_almost_equal(self.res1.aic, self.res2.aic)
+ tm.assert_almost_equal(self.res1.aic, self.res2.aic)
def test_bic(self):
- assert_almost_equal(self.res1.bic, self.res2.bic)
+ tm.assert_almost_equal(self.res1.bic, self.res2.bic)
def test_hqic(self):
- assert_almost_equal(self.res1.hqic, self.res2.hqic)
+ tm.assert_almost_equal(self.res1.hqic, self.res2.hqic)
def test_fpe(self):
- assert_almost_equal(self.res1.fpe, self.res2.fpe)
+ tm.assert_almost_equal(self.res1.fpe, self.res2.fpe)
def test_detsig(self):
- assert_almost_equal(self.res1.detomega, self.res2.detsig)
+ tm.assert_almost_equal(self.res1.detomega, self.res2.detsig)
def test_bse(self):
- assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_4)
+ tm.assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_4)
class Foo(object):
diff --git a/pandas/tests/frame/test_misc_api.py b/pandas/tests/frame/test_misc_api.py
index 0857d23dc1176..48b8d641a0f98 100644
--- a/pandas/tests/frame/test_misc_api.py
+++ b/pandas/tests/frame/test_misc_api.py
@@ -391,7 +391,7 @@ def test_repr_with_mi_nat(self):
index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']])
res = repr(df)
exp = ' X\nNaT a 1\n2013-01-01 b 2'
- nose.tools.assert_equal(res, exp)
+ self.assertEqual(res, exp)
def test_iterkv_deprecation(self):
with tm.assert_produces_warning(FutureWarning):
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 3d4be319092c3..66e592c013fb1 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -14,7 +14,6 @@
import pandas.formats.format as fmt
import pandas as pd
-from numpy.testing.decorators import slow
import pandas.util.testing as tm
from pandas.tests.frame.common import TestData
@@ -43,7 +42,7 @@ def test_repr_mixed(self):
foo = repr(self.mixed_frame) # noqa
self.mixed_frame.info(verbose=False, buf=buf)
- @slow
+ @tm.slow
def test_repr_mixed_big(self):
# big mixed
biggie = DataFrame({'A': np.random.randn(200),
@@ -90,7 +89,7 @@ def test_repr_dimensions(self):
with option_context('display.show_dimensions', 'truncate'):
self.assertFalse("2 rows x 2 columns" in repr(df))
- @slow
+ @tm.slow
def test_repr_big(self):
# big one
biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4),
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 718f47eea3a0f..9a16714e18be3 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -14,14 +14,11 @@
import pandas as pd
from pandas.util.testing import (assert_almost_equal,
- assert_equal,
assert_series_equal,
assert_frame_equal,
ensure_clean,
makeCustomDataframe as mkdf,
- assertRaisesRegexp)
-
-from numpy.testing.decorators import slow
+ assertRaisesRegexp, slow)
import pandas.util.testing as tm
from pandas.tests.frame.common import TestData
@@ -453,7 +450,7 @@ def test_to_csv_with_mix_columns(self):
df = DataFrame({0: ['a', 'b', 'c'],
1: ['aa', 'bb', 'cc']})
df['test'] = 'txt'
- assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test']))
+ self.assertEqual(df.to_csv(), df.to_csv(columns=[0, 1, 'test']))
def test_to_csv_headers(self):
# GH6186, the presence or absence of `index` incorrectly
@@ -508,8 +505,7 @@ def test_to_csv_multiindex(self):
# do not load index
tsframe.to_csv(path)
recons = DataFrame.from_csv(path, index_col=None)
- np.testing.assert_equal(
- len(recons.columns), len(tsframe.columns) + 2)
+ self.assertEqual(len(recons.columns), len(tsframe.columns) + 2)
# no index
tsframe.to_csv(path, index=False)
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index abb9d55e27758..1247e4dc62997 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -358,7 +358,7 @@ def test_astype_from_object(self):
index = Index([1.0, np.nan, 0.2], dtype='object')
result = index.astype(float)
expected = Float64Index([1.0, np.nan, 0.2])
- tm.assert_equal(result.dtype, expected.dtype)
+ self.assertEqual(result.dtype, expected.dtype)
tm.assert_index_equal(result, expected)
def test_fillna_float64(self):
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 708006a9dc21b..e1fd17f0c26e0 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -20,14 +20,14 @@
MultiIndex, Timestamp, Timedelta)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal,
- assert_attr_equal)
+ assert_attr_equal, slow)
from pandas.formats.printing import pprint_thing
from pandas import concat, lib
from pandas.core.common import PerformanceWarning
import pandas.util.testing as tm
from pandas import date_range
-from numpy.testing.decorators import slow
+
_verbose = False
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 878a639a25aa5..34aaccb6464aa 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1356,7 +1356,7 @@ def test_searchsorted_numeric_dtypes_scalar(self):
s = Series([1, 2, 90, 1000, 3e9])
r = s.searchsorted(30)
e = 2
- tm.assert_equal(r, e)
+ self.assertEqual(r, e)
r = s.searchsorted([30])
e = np.array([2], dtype=np.int64)
@@ -1373,7 +1373,7 @@ def test_search_sorted_datetime64_scalar(self):
v = pd.Timestamp('20120102')
r = s.searchsorted(v)
e = 1
- tm.assert_equal(r, e)
+ self.assertEqual(r, e)
def test_search_sorted_datetime64_list(self):
s = Series(pd.date_range('20120101', periods=10, freq='2D'))
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 5a0d079efb4c2..d74fe68617ea2 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -1420,7 +1420,7 @@ def test_sort_values_na_position(self):
def test_slicing_directly(self):
cat = Categorical(["a", "b", "c", "d", "a", "b", "c"])
sliced = cat[3]
- tm.assert_equal(sliced, "d")
+ self.assertEqual(sliced, "d")
sliced = cat[3:5]
expected = Categorical(["d", "a"], categories=['a', 'b', 'c', 'd'])
self.assert_numpy_array_equal(sliced._codes, expected._codes)
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 044272f24a21f..b6ed5dc68f905 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -15,10 +15,10 @@
from pandas import compat
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal,
- assert_panel4d_equal)
+ assert_panel4d_equal, slow)
from pandas.formats.printing import pprint_thing
import pandas.util.testing as tm
-from numpy.testing.decorators import slow
+
if not expr._USE_NUMEXPR:
try:
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 794b5e8aa5650..36962a37ec898 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -21,8 +21,7 @@
assert_frame_equal,
assert_panel_equal,
assert_panel4d_equal,
- assert_almost_equal,
- assert_equal)
+ assert_almost_equal)
import pandas.util.testing as tm
@@ -1346,7 +1345,7 @@ def test_set_attribute(self):
df['y'] = [2, 4, 6]
df.y = 5
- assert_equal(df.y, 5)
+ self.assertEqual(df.y, 5)
assert_series_equal(df['y'], Series([2, 4, 6], name='y'))
def test_pct_change(self):
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 3820a9d5f6476..b59d6ac0027dd 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -19,7 +19,7 @@
import pandas.core.common as com
import pandas.util.testing as tm
from pandas.util.testing import (ensure_clean,
- assert_is_valid_plot_return_object)
+ assert_is_valid_plot_return_object, slow)
from pandas.core.config import set_option
@@ -27,8 +27,6 @@
from numpy import random
from numpy.random import rand, randn
-from numpy.testing import assert_allclose
-from numpy.testing.decorators import slow
import pandas.tools.plotting as plotting
"""
These tests are for ``Dataframe.plot`` and ``Series.plot``.
@@ -140,7 +138,7 @@ def _check_data(self, xp, rs):
def check_line(xpl, rsl):
xpdata = xpl.get_xydata()
rsdata = rsl.get_xydata()
- assert_allclose(xpdata, rsdata)
+ tm.assert_almost_equal(xpdata, rsdata)
self.assertEqual(len(xp_lines), len(rs_lines))
[check_line(xpl, rsl) for xpl, rsl in zip(xp_lines, rs_lines)]
diff --git a/pandas/tests/test_graphics_others.py b/pandas/tests/test_graphics_others.py
index b032ce196c113..7285d84865542 100644
--- a/pandas/tests/test_graphics_others.py
+++ b/pandas/tests/test_graphics_others.py
@@ -11,12 +11,12 @@
from pandas import Series, DataFrame, MultiIndex
from pandas.compat import range, lmap, lzip
import pandas.util.testing as tm
+from pandas.util.testing import slow
import numpy as np
from numpy import random
from numpy.random import randn
-from numpy.testing.decorators import slow
import pandas.tools.plotting as plotting
from pandas.tests.test_graphics import (TestPlotBase, _check_plot_works,
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 38e6a066d3eea..1996d132e01ba 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -31,7 +31,6 @@
import pandas.util.testing as tm
import pandas as pd
-from numpy.testing import assert_equal
class TestGroupBy(tm.TestCase):
@@ -4621,10 +4620,10 @@ def test_timezone_info(self):
import pytz
df = pd.DataFrame({'a': [1], 'b': [datetime.now(pytz.utc)]})
- tm.assert_equal(df['b'][0].tzinfo, pytz.utc)
+ self.assertEqual(df['b'][0].tzinfo, pytz.utc)
df = pd.DataFrame({'a': [1, 2, 3]})
df['b'] = datetime.now(pytz.utc)
- tm.assert_equal(df['b'][0].tzinfo, pytz.utc)
+ self.assertEqual(df['b'][0].tzinfo, pytz.utc)
def test_groupby_with_timegrouper(self):
# GH 4161
@@ -5855,24 +5854,24 @@ def test_lexsort_indexer(self):
# orders=True, na_position='last'
result = _lexsort_indexer(keys, orders=True, na_position='last')
expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# orders=True, na_position='first'
result = _lexsort_indexer(keys, orders=True, na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# orders=False, na_position='last'
result = _lexsort_indexer(keys, orders=False, na_position='last')
expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# orders=False, na_position='first'
result = _lexsort_indexer(keys, orders=False, na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-1))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
def test_nargsort(self):
# np.argsort(items) places NaNs last
@@ -5899,53 +5898,53 @@ def test_nargsort(self):
result = _nargsort(items, kind='mergesort', ascending=True,
na_position='last')
expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=True, na_position='first'
result = _nargsort(items, kind='mergesort', ascending=True,
na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=False, na_position='last'
result = _nargsort(items, kind='mergesort', ascending=False,
na_position='last')
expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=False, na_position='first'
result = _nargsort(items, kind='mergesort', ascending=False,
na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-1))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=True, na_position='last'
result = _nargsort(items2, kind='mergesort', ascending=True,
na_position='last')
expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=True, na_position='first'
result = _nargsort(items2, kind='mergesort', ascending=True,
na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=False, na_position='last'
result = _nargsort(items2, kind='mergesort', ascending=False,
na_position='last')
expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
110))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
# mergesort, ascending=False, na_position='first'
result = _nargsort(items2, kind='mergesort', ascending=False,
na_position='first')
expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-1))
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
def test_datetime_count(self):
df = DataFrame({'a': [1, 2, 3] * 2,
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index d33a64002c3b1..7f8fb8fa424d1 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -873,17 +873,15 @@ def test_ground_truth(self):
for axis in range(2):
for ddof in range(3):
var = nanops.nanvar(samples, skipna=True, axis=axis, ddof=ddof)
- np.testing.assert_array_almost_equal(var[:3],
- variance[axis, ddof])
- np.testing.assert_equal(var[3], np.nan)
+ tm.assert_almost_equal(var[:3], variance[axis, ddof])
+ self.assertTrue(np.isnan(var[3]))
# Test nanstd.
for axis in range(2):
for ddof in range(3):
std = nanops.nanstd(samples, skipna=True, axis=axis, ddof=ddof)
- np.testing.assert_array_almost_equal(
- std[:3], variance[axis, ddof] ** 0.5)
- np.testing.assert_equal(std[3], np.nan)
+ tm.assert_almost_equal(std[:3], variance[axis, ddof] ** 0.5)
+ self.assertTrue(np.isnan(std[3]))
def test_nanstd_roundoff(self):
# Regression test for GH 10242 (test data taken from GH 10489). Ensure
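The `test_nanops` hunk above replaces `np.testing.assert_equal(var[3], np.nan)` with `self.assertTrue(np.isnan(var[3]))`. The explicit `isnan` check is the unambiguous way to assert a NaN result, since NaN never compares equal; a quick illustration:

```python
import numpy as np

x = float("nan")

# NaN never compares equal, even to itself, so a plain equality
# assertion on a NaN result can never pass.
assert x != x

# np.isnan is the explicit way to assert "this value is NaN".
assert np.isnan(x)
```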
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 05525acedc245..423a288077c4d 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -573,7 +573,7 @@ def test_extract_expand_False(self):
# single group renames series/index properly
s_or_idx = klass(['A1', 'A2'])
result = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=False)
- tm.assert_equal(result.name, 'uno')
+ self.assertEqual(result.name, 'uno')
tm.assert_numpy_array_equal(result, klass(['A', 'A']))
s = Series(['A1', 'B2', 'C3'])
@@ -1105,7 +1105,7 @@ def test_empty_str_methods(self):
# (extract) on empty series
tm.assert_series_equal(empty_str, empty.str.cat(empty))
- tm.assert_equal('', empty.str.cat())
+ self.assertEqual('', empty.str.cat())
tm.assert_series_equal(empty_str, empty.str.title())
tm.assert_series_equal(empty_int, empty.str.count('a'))
tm.assert_series_equal(empty_bool, empty.str.contains('a'))
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 8d9a55bade30d..1185f95dbd51f 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -6,7 +6,6 @@
from nose.tools import assert_raises
from datetime import datetime
from numpy.random import randn
-from numpy.testing.decorators import slow
import numpy as np
from distutils.version import LooseVersion
@@ -15,7 +14,8 @@
notnull, concat)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal,
- assert_index_equal, assert_numpy_array_equal)
+ assert_index_equal, assert_numpy_array_equal,
+ slow)
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
import pandas.core.window as rwindow
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 13f00afb5a489..474ce0f899217 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -17,12 +17,12 @@
from pandas.util.testing import (assert_frame_equal, assert_series_equal,
assert_almost_equal,
makeCustomDataframe as mkdf,
- assertRaisesRegexp)
+ assertRaisesRegexp, slow)
from pandas import (isnull, DataFrame, Index, MultiIndex, Panel,
Series, date_range, read_csv)
import pandas.algos as algos
import pandas.util.testing as tm
-from numpy.testing.decorators import slow
+
a_ = np.array
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 5ebd2e4f693cf..82feaae13f771 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -1,13 +1,12 @@
from datetime import datetime, date, timedelta
import numpy as np
-from numpy.testing import assert_equal
import pandas as pd
from pandas import DataFrame, Series, Index, MultiIndex, Grouper
from pandas.tools.merge import concat
from pandas.tools.pivot import pivot_table, crosstab
-from pandas.compat import range, u, product
+from pandas.compat import range, product
import pandas.util.testing as tm
@@ -80,21 +79,13 @@ def test_pivot_table_dropna(self):
pv_ind = df.pivot_table(
'quantity', ['customer', 'product'], 'month', dropna=False)
- m = MultiIndex.from_tuples([(u('A'), u('a')),
- (u('A'), u('b')),
- (u('A'), u('c')),
- (u('A'), u('d')),
- (u('B'), u('a')),
- (u('B'), u('b')),
- (u('B'), u('c')),
- (u('B'), u('d')),
- (u('C'), u('a')),
- (u('C'), u('b')),
- (u('C'), u('c')),
- (u('C'), u('d'))])
-
- assert_equal(pv_col.columns.values, m.values)
- assert_equal(pv_ind.index.values, m.values)
+ m = MultiIndex.from_tuples([('A', 'a'), ('A', 'b'), ('A', 'c'),
+ ('A', 'd'), ('B', 'a'), ('B', 'b'),
+ ('B', 'c'), ('B', 'd'), ('C', 'a'),
+ ('C', 'b'), ('C', 'c'), ('C', 'd')],
+ names=['customer', 'product'])
+ tm.assert_index_equal(pv_col.columns, m)
+ tm.assert_index_equal(pv_ind.index, m)
def test_pass_array(self):
result = self.data.pivot_table(
@@ -902,8 +893,9 @@ def test_crosstab_dropna(self):
res = pd.crosstab(a, [b, c], rownames=['a'],
colnames=['b', 'c'], dropna=False)
m = MultiIndex.from_tuples([('one', 'dull'), ('one', 'shiny'),
- ('two', 'dull'), ('two', 'shiny')])
- assert_equal(res.columns.values, m.values)
+ ('two', 'dull'), ('two', 'shiny')],
+ names=['b', 'c'])
+ tm.assert_index_equal(res.columns, m)
def test_categorical_margins(self):
# GH 10989
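The `test_pivot` changes above stop comparing `.values` and compare the indexes themselves, because `tm.assert_index_equal` also verifies level names, which the raw-values comparison silently ignored. A minimal sketch (modern `pandas._testing` path; the PR-era import was `pandas.util.testing`):

```python
import pandas as pd
import pandas._testing as tm  # pandas.util.testing at the time of this PR

m = pd.MultiIndex.from_tuples([("A", "a"), ("A", "b")],
                              names=["customer", "product"])
n = pd.MultiIndex.from_tuples([("A", "a"), ("A", "b")])  # no level names

# The old approach compared only the tuple values, which match...
assert (m.values == n.values).all()

# ...but assert_index_equal also checks level names, so this raises.
try:
    tm.assert_index_equal(m, n)
except AssertionError:
    print("level-name mismatch caught")
```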
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index 1c4f55b2defa4..92a41199f264d 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -4,7 +4,6 @@
import nose
import numpy as np
-from numpy.testing import assert_equal
import pandas as pd
from pandas import date_range, Index
@@ -22,7 +21,7 @@ def test_simple(self):
result = cartesian_product([x, y])
expected = [np.array(['A', 'A', 'B', 'B', 'C', 'C']),
np.array([1, 22, 1, 22, 1, 22])]
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
def test_datetimeindex(self):
# regression test for GitHub issue #6439
@@ -30,7 +29,7 @@ def test_datetimeindex(self):
x = date_range('2000-01-01', periods=2)
result = [Index(y).day for y in cartesian_product([x, x])]
expected = [np.array([1, 1, 2, 2]), np.array([1, 2, 1, 2])]
- assert_equal(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
class TestLocaleUtils(tm.TestCase):
diff --git a/pandas/tseries/tests/test_converter.py b/pandas/tseries/tests/test_converter.py
index f2c20f7d3111d..ceb8660efb9cd 100644
--- a/pandas/tseries/tests/test_converter.py
+++ b/pandas/tseries/tests/test_converter.py
@@ -3,7 +3,6 @@
import nose
import numpy as np
-from numpy.testing import assert_almost_equal as np_assert_almost_equal
from pandas import Timestamp, Period
from pandas.compat import u
import pandas.util.testing as tm
@@ -69,14 +68,14 @@ def test_conversion_float(self):
rs = self.dtc.convert(
Timestamp('2012-1-1 01:02:03', tz='UTC'), None, None)
xp = converter.dates.date2num(Timestamp('2012-1-1 01:02:03', tz='UTC'))
- np_assert_almost_equal(rs, xp, decimals)
+ tm.assert_almost_equal(rs, xp, decimals)
rs = self.dtc.convert(
Timestamp('2012-1-1 09:02:03', tz='Asia/Hong_Kong'), None, None)
- np_assert_almost_equal(rs, xp, decimals)
+ tm.assert_almost_equal(rs, xp, decimals)
rs = self.dtc.convert(datetime(2012, 1, 1, 1, 2, 3), None, None)
- np_assert_almost_equal(rs, xp, decimals)
+ tm.assert_almost_equal(rs, xp, decimals)
def test_time_formatter(self):
self.tc(90000)
@@ -88,7 +87,7 @@ def test_dateindex_conversion(self):
dateindex = tm.makeDateIndex(k=10, freq=freq)
rs = self.dtc.convert(dateindex, None, None)
xp = converter.dates.date2num(dateindex._mpl_repr())
- np_assert_almost_equal(rs, xp, decimals)
+ tm.assert_almost_equal(rs, xp, decimals)
def test_resolution(self):
def _assert_less(ts1, ts2):
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 167690e4846e9..b0df824f0a832 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -8,8 +8,6 @@
from datetime import datetime, date, timedelta
-from numpy.ma.testutils import assert_equal
-
from pandas import Timestamp
from pandas.tseries.frequencies import MONTHS, DAYS, _period_code_map
from pandas.tseries.period import Period, PeriodIndex, period_range
@@ -625,7 +623,7 @@ def _ex(*args):
def test_properties_annually(self):
# Test properties on Periods with annually frequency.
a_date = Period(freq='A', year=2007)
- assert_equal(a_date.year, 2007)
+ self.assertEqual(a_date.year, 2007)
def test_properties_quarterly(self):
# Test properties on Periods with daily frequency.
@@ -635,78 +633,78 @@ def test_properties_quarterly(self):
#
for x in range(3):
for qd in (qedec_date, qejan_date, qejun_date):
- assert_equal((qd + x).qyear, 2007)
- assert_equal((qd + x).quarter, x + 1)
+ self.assertEqual((qd + x).qyear, 2007)
+ self.assertEqual((qd + x).quarter, x + 1)
def test_properties_monthly(self):
# Test properties on Periods with daily frequency.
m_date = Period(freq='M', year=2007, month=1)
for x in range(11):
m_ival_x = m_date + x
- assert_equal(m_ival_x.year, 2007)
+ self.assertEqual(m_ival_x.year, 2007)
if 1 <= x + 1 <= 3:
- assert_equal(m_ival_x.quarter, 1)
+ self.assertEqual(m_ival_x.quarter, 1)
elif 4 <= x + 1 <= 6:
- assert_equal(m_ival_x.quarter, 2)
+ self.assertEqual(m_ival_x.quarter, 2)
elif 7 <= x + 1 <= 9:
- assert_equal(m_ival_x.quarter, 3)
+ self.assertEqual(m_ival_x.quarter, 3)
elif 10 <= x + 1 <= 12:
- assert_equal(m_ival_x.quarter, 4)
- assert_equal(m_ival_x.month, x + 1)
+ self.assertEqual(m_ival_x.quarter, 4)
+ self.assertEqual(m_ival_x.month, x + 1)
def test_properties_weekly(self):
# Test properties on Periods with daily frequency.
w_date = Period(freq='W', year=2007, month=1, day=7)
#
- assert_equal(w_date.year, 2007)
- assert_equal(w_date.quarter, 1)
- assert_equal(w_date.month, 1)
- assert_equal(w_date.week, 1)
- assert_equal((w_date - 1).week, 52)
- assert_equal(w_date.days_in_month, 31)
- assert_equal(Period(freq='W', year=2012,
- month=2, day=1).days_in_month, 29)
+ self.assertEqual(w_date.year, 2007)
+ self.assertEqual(w_date.quarter, 1)
+ self.assertEqual(w_date.month, 1)
+ self.assertEqual(w_date.week, 1)
+ self.assertEqual((w_date - 1).week, 52)
+ self.assertEqual(w_date.days_in_month, 31)
+ self.assertEqual(Period(freq='W', year=2012,
+ month=2, day=1).days_in_month, 29)
def test_properties_weekly_legacy(self):
# Test properties on Periods with daily frequency.
with tm.assert_produces_warning(FutureWarning):
w_date = Period(freq='WK', year=2007, month=1, day=7)
#
- assert_equal(w_date.year, 2007)
- assert_equal(w_date.quarter, 1)
- assert_equal(w_date.month, 1)
- assert_equal(w_date.week, 1)
- assert_equal((w_date - 1).week, 52)
- assert_equal(w_date.days_in_month, 31)
+ self.assertEqual(w_date.year, 2007)
+ self.assertEqual(w_date.quarter, 1)
+ self.assertEqual(w_date.month, 1)
+ self.assertEqual(w_date.week, 1)
+ self.assertEqual((w_date - 1).week, 52)
+ self.assertEqual(w_date.days_in_month, 31)
with tm.assert_produces_warning(FutureWarning):
exp = Period(freq='WK', year=2012, month=2, day=1)
- assert_equal(exp.days_in_month, 29)
+ self.assertEqual(exp.days_in_month, 29)
def test_properties_daily(self):
# Test properties on Periods with daily frequency.
b_date = Period(freq='B', year=2007, month=1, day=1)
#
- assert_equal(b_date.year, 2007)
- assert_equal(b_date.quarter, 1)
- assert_equal(b_date.month, 1)
- assert_equal(b_date.day, 1)
- assert_equal(b_date.weekday, 0)
- assert_equal(b_date.dayofyear, 1)
- assert_equal(b_date.days_in_month, 31)
- assert_equal(Period(freq='B', year=2012,
- month=2, day=1).days_in_month, 29)
+ self.assertEqual(b_date.year, 2007)
+ self.assertEqual(b_date.quarter, 1)
+ self.assertEqual(b_date.month, 1)
+ self.assertEqual(b_date.day, 1)
+ self.assertEqual(b_date.weekday, 0)
+ self.assertEqual(b_date.dayofyear, 1)
+ self.assertEqual(b_date.days_in_month, 31)
+ self.assertEqual(Period(freq='B', year=2012,
+ month=2, day=1).days_in_month, 29)
#
d_date = Period(freq='D', year=2007, month=1, day=1)
#
- assert_equal(d_date.year, 2007)
- assert_equal(d_date.quarter, 1)
- assert_equal(d_date.month, 1)
- assert_equal(d_date.day, 1)
- assert_equal(d_date.weekday, 0)
- assert_equal(d_date.dayofyear, 1)
- assert_equal(d_date.days_in_month, 31)
- assert_equal(Period(freq='D', year=2012, month=2,
- day=1).days_in_month, 29)
+ self.assertEqual(d_date.year, 2007)
+ self.assertEqual(d_date.quarter, 1)
+ self.assertEqual(d_date.month, 1)
+ self.assertEqual(d_date.day, 1)
+ self.assertEqual(d_date.weekday, 0)
+ self.assertEqual(d_date.dayofyear, 1)
+ self.assertEqual(d_date.days_in_month, 31)
+ self.assertEqual(Period(freq='D', year=2012, month=2,
+ day=1).days_in_month, 29)
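The daily-frequency assertions above exercise pandas' `Period` calendar accessors. A minimal standalone sketch of the same behavior against the current pandas API (not the test harness itself):

```python
import pandas as pd

# A daily Period exposes its calendar fields directly; days_in_month
# is leap-year aware, so February 2012 reports 29 days.
d = pd.Period(freq='D', year=2007, month=1, day=1)
print(d.year, d.quarter, d.month, d.day, d.weekday, d.dayofyear)
print(d.days_in_month)
print(pd.Period(freq='D', year=2012, month=2, day=1).days_in_month)
```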
def test_properties_hourly(self):
# Test properties on Periods with hourly frequency.
@@ -714,50 +712,50 @@ def test_properties_hourly(self):
h_date2 = Period(freq='2H', year=2007, month=1, day=1, hour=0)
for h_date in [h_date1, h_date2]:
- assert_equal(h_date.year, 2007)
- assert_equal(h_date.quarter, 1)
- assert_equal(h_date.month, 1)
- assert_equal(h_date.day, 1)
- assert_equal(h_date.weekday, 0)
- assert_equal(h_date.dayofyear, 1)
- assert_equal(h_date.hour, 0)
- assert_equal(h_date.days_in_month, 31)
- assert_equal(Period(freq='H', year=2012, month=2, day=1,
- hour=0).days_in_month, 29)
+ self.assertEqual(h_date.year, 2007)
+ self.assertEqual(h_date.quarter, 1)
+ self.assertEqual(h_date.month, 1)
+ self.assertEqual(h_date.day, 1)
+ self.assertEqual(h_date.weekday, 0)
+ self.assertEqual(h_date.dayofyear, 1)
+ self.assertEqual(h_date.hour, 0)
+ self.assertEqual(h_date.days_in_month, 31)
+ self.assertEqual(Period(freq='H', year=2012, month=2, day=1,
+ hour=0).days_in_month, 29)
def test_properties_minutely(self):
# Test properties on Periods with minutely frequency.
t_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0)
#
- assert_equal(t_date.quarter, 1)
- assert_equal(t_date.month, 1)
- assert_equal(t_date.day, 1)
- assert_equal(t_date.weekday, 0)
- assert_equal(t_date.dayofyear, 1)
- assert_equal(t_date.hour, 0)
- assert_equal(t_date.minute, 0)
- assert_equal(t_date.days_in_month, 31)
- assert_equal(Period(freq='D', year=2012, month=2, day=1, hour=0,
- minute=0).days_in_month, 29)
+ self.assertEqual(t_date.quarter, 1)
+ self.assertEqual(t_date.month, 1)
+ self.assertEqual(t_date.day, 1)
+ self.assertEqual(t_date.weekday, 0)
+ self.assertEqual(t_date.dayofyear, 1)
+ self.assertEqual(t_date.hour, 0)
+ self.assertEqual(t_date.minute, 0)
+ self.assertEqual(t_date.days_in_month, 31)
+ self.assertEqual(Period(freq='D', year=2012, month=2, day=1, hour=0,
+ minute=0).days_in_month, 29)
def test_properties_secondly(self):
# Test properties on Periods with secondly frequency.
s_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0, second=0)
#
- assert_equal(s_date.year, 2007)
- assert_equal(s_date.quarter, 1)
- assert_equal(s_date.month, 1)
- assert_equal(s_date.day, 1)
- assert_equal(s_date.weekday, 0)
- assert_equal(s_date.dayofyear, 1)
- assert_equal(s_date.hour, 0)
- assert_equal(s_date.minute, 0)
- assert_equal(s_date.second, 0)
- assert_equal(s_date.days_in_month, 31)
- assert_equal(Period(freq='Min', year=2012, month=2, day=1, hour=0,
- minute=0, second=0).days_in_month, 29)
+ self.assertEqual(s_date.year, 2007)
+ self.assertEqual(s_date.quarter, 1)
+ self.assertEqual(s_date.month, 1)
+ self.assertEqual(s_date.day, 1)
+ self.assertEqual(s_date.weekday, 0)
+ self.assertEqual(s_date.dayofyear, 1)
+ self.assertEqual(s_date.hour, 0)
+ self.assertEqual(s_date.minute, 0)
+ self.assertEqual(s_date.second, 0)
+ self.assertEqual(s_date.days_in_month, 31)
+ self.assertEqual(Period(freq='Min', year=2012, month=2, day=1, hour=0,
+ minute=0, second=0).days_in_month, 29)
def test_properties_nat(self):
p_nat = Period('NaT', freq='M')
@@ -894,35 +892,35 @@ def test_conv_annual(self):
ival_ANOV_to_D_end = Period(freq='D', year=2007, month=11, day=30)
ival_ANOV_to_D_start = Period(freq='D', year=2006, month=12, day=1)
- assert_equal(ival_A.asfreq('Q', 'S'), ival_A_to_Q_start)
- assert_equal(ival_A.asfreq('Q', 'e'), ival_A_to_Q_end)
- assert_equal(ival_A.asfreq('M', 's'), ival_A_to_M_start)
- assert_equal(ival_A.asfreq('M', 'E'), ival_A_to_M_end)
- assert_equal(ival_A.asfreq('W', 'S'), ival_A_to_W_start)
- assert_equal(ival_A.asfreq('W', 'E'), ival_A_to_W_end)
- assert_equal(ival_A.asfreq('B', 'S'), ival_A_to_B_start)
- assert_equal(ival_A.asfreq('B', 'E'), ival_A_to_B_end)
- assert_equal(ival_A.asfreq('D', 'S'), ival_A_to_D_start)
- assert_equal(ival_A.asfreq('D', 'E'), ival_A_to_D_end)
- assert_equal(ival_A.asfreq('H', 'S'), ival_A_to_H_start)
- assert_equal(ival_A.asfreq('H', 'E'), ival_A_to_H_end)
- assert_equal(ival_A.asfreq('min', 'S'), ival_A_to_T_start)
- assert_equal(ival_A.asfreq('min', 'E'), ival_A_to_T_end)
- assert_equal(ival_A.asfreq('T', 'S'), ival_A_to_T_start)
- assert_equal(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
- assert_equal(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
- assert_equal(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
-
- assert_equal(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
- assert_equal(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
-
- assert_equal(ival_AJUN.asfreq('D', 'S'), ival_AJUN_to_D_start)
- assert_equal(ival_AJUN.asfreq('D', 'E'), ival_AJUN_to_D_end)
-
- assert_equal(ival_ANOV.asfreq('D', 'S'), ival_ANOV_to_D_start)
- assert_equal(ival_ANOV.asfreq('D', 'E'), ival_ANOV_to_D_end)
-
- assert_equal(ival_A.asfreq('A'), ival_A)
+ self.assertEqual(ival_A.asfreq('Q', 'S'), ival_A_to_Q_start)
+ self.assertEqual(ival_A.asfreq('Q', 'e'), ival_A_to_Q_end)
+ self.assertEqual(ival_A.asfreq('M', 's'), ival_A_to_M_start)
+ self.assertEqual(ival_A.asfreq('M', 'E'), ival_A_to_M_end)
+ self.assertEqual(ival_A.asfreq('W', 'S'), ival_A_to_W_start)
+ self.assertEqual(ival_A.asfreq('W', 'E'), ival_A_to_W_end)
+ self.assertEqual(ival_A.asfreq('B', 'S'), ival_A_to_B_start)
+ self.assertEqual(ival_A.asfreq('B', 'E'), ival_A_to_B_end)
+ self.assertEqual(ival_A.asfreq('D', 'S'), ival_A_to_D_start)
+ self.assertEqual(ival_A.asfreq('D', 'E'), ival_A_to_D_end)
+ self.assertEqual(ival_A.asfreq('H', 'S'), ival_A_to_H_start)
+ self.assertEqual(ival_A.asfreq('H', 'E'), ival_A_to_H_end)
+ self.assertEqual(ival_A.asfreq('min', 'S'), ival_A_to_T_start)
+ self.assertEqual(ival_A.asfreq('min', 'E'), ival_A_to_T_end)
+ self.assertEqual(ival_A.asfreq('T', 'S'), ival_A_to_T_start)
+ self.assertEqual(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
+ self.assertEqual(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
+ self.assertEqual(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
+
+ self.assertEqual(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
+ self.assertEqual(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
+
+ self.assertEqual(ival_AJUN.asfreq('D', 'S'), ival_AJUN_to_D_start)
+ self.assertEqual(ival_AJUN.asfreq('D', 'E'), ival_AJUN_to_D_end)
+
+ self.assertEqual(ival_ANOV.asfreq('D', 'S'), ival_ANOV_to_D_start)
+ self.assertEqual(ival_ANOV.asfreq('D', 'E'), ival_ANOV_to_D_end)
+
+ self.assertEqual(ival_A.asfreq('A'), ival_A)
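The `'S'`/`'E'` second argument to `asfreq` throughout these assertions is the `how` convention: when a coarse period is converted to a finer frequency, it selects either the first or the last sub-period of the span. A small sketch (using `'Y'`, the annual alias in current pandas; the tests use the older `'A'` spelling):

```python
import pandas as pd

# An annual period spans twelve months; asfreq needs to know whether
# to return the first ('S'/'start') or last ('E'/'end') sub-period.
a = pd.Period('2007', freq='Y')
print(a.asfreq('M', how='S'))  # January 2007
print(a.asfreq('M', how='E'))  # December 2007
```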
def test_conv_quarterly(self):
# frequency conversion tests: from Quarterly Frequency
@@ -959,30 +957,30 @@ def test_conv_quarterly(self):
ival_QEJUN_to_D_start = Period(freq='D', year=2006, month=7, day=1)
ival_QEJUN_to_D_end = Period(freq='D', year=2006, month=9, day=30)
- assert_equal(ival_Q.asfreq('A'), ival_Q_to_A)
- assert_equal(ival_Q_end_of_year.asfreq('A'), ival_Q_to_A)
-
- assert_equal(ival_Q.asfreq('M', 'S'), ival_Q_to_M_start)
- assert_equal(ival_Q.asfreq('M', 'E'), ival_Q_to_M_end)
- assert_equal(ival_Q.asfreq('W', 'S'), ival_Q_to_W_start)
- assert_equal(ival_Q.asfreq('W', 'E'), ival_Q_to_W_end)
- assert_equal(ival_Q.asfreq('B', 'S'), ival_Q_to_B_start)
- assert_equal(ival_Q.asfreq('B', 'E'), ival_Q_to_B_end)
- assert_equal(ival_Q.asfreq('D', 'S'), ival_Q_to_D_start)
- assert_equal(ival_Q.asfreq('D', 'E'), ival_Q_to_D_end)
- assert_equal(ival_Q.asfreq('H', 'S'), ival_Q_to_H_start)
- assert_equal(ival_Q.asfreq('H', 'E'), ival_Q_to_H_end)
- assert_equal(ival_Q.asfreq('Min', 'S'), ival_Q_to_T_start)
- assert_equal(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end)
- assert_equal(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start)
- assert_equal(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end)
-
- assert_equal(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start)
- assert_equal(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end)
- assert_equal(ival_QEJUN.asfreq('D', 'S'), ival_QEJUN_to_D_start)
- assert_equal(ival_QEJUN.asfreq('D', 'E'), ival_QEJUN_to_D_end)
-
- assert_equal(ival_Q.asfreq('Q'), ival_Q)
+ self.assertEqual(ival_Q.asfreq('A'), ival_Q_to_A)
+ self.assertEqual(ival_Q_end_of_year.asfreq('A'), ival_Q_to_A)
+
+ self.assertEqual(ival_Q.asfreq('M', 'S'), ival_Q_to_M_start)
+ self.assertEqual(ival_Q.asfreq('M', 'E'), ival_Q_to_M_end)
+ self.assertEqual(ival_Q.asfreq('W', 'S'), ival_Q_to_W_start)
+ self.assertEqual(ival_Q.asfreq('W', 'E'), ival_Q_to_W_end)
+ self.assertEqual(ival_Q.asfreq('B', 'S'), ival_Q_to_B_start)
+ self.assertEqual(ival_Q.asfreq('B', 'E'), ival_Q_to_B_end)
+ self.assertEqual(ival_Q.asfreq('D', 'S'), ival_Q_to_D_start)
+ self.assertEqual(ival_Q.asfreq('D', 'E'), ival_Q_to_D_end)
+ self.assertEqual(ival_Q.asfreq('H', 'S'), ival_Q_to_H_start)
+ self.assertEqual(ival_Q.asfreq('H', 'E'), ival_Q_to_H_end)
+ self.assertEqual(ival_Q.asfreq('Min', 'S'), ival_Q_to_T_start)
+ self.assertEqual(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end)
+ self.assertEqual(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start)
+ self.assertEqual(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end)
+
+ self.assertEqual(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start)
+ self.assertEqual(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end)
+ self.assertEqual(ival_QEJUN.asfreq('D', 'S'), ival_QEJUN_to_D_start)
+ self.assertEqual(ival_QEJUN.asfreq('D', 'E'), ival_QEJUN_to_D_end)
+
+ self.assertEqual(ival_Q.asfreq('Q'), ival_Q)
def test_conv_monthly(self):
# frequency conversion tests: from Monthly Frequency
@@ -1009,25 +1007,25 @@ def test_conv_monthly(self):
ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31, hour=23,
minute=59, second=59)
- assert_equal(ival_M.asfreq('A'), ival_M_to_A)
- assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
- assert_equal(ival_M.asfreq('Q'), ival_M_to_Q)
- assert_equal(ival_M_end_of_quarter.asfreq('Q'), ival_M_to_Q)
-
- assert_equal(ival_M.asfreq('W', 'S'), ival_M_to_W_start)
- assert_equal(ival_M.asfreq('W', 'E'), ival_M_to_W_end)
- assert_equal(ival_M.asfreq('B', 'S'), ival_M_to_B_start)
- assert_equal(ival_M.asfreq('B', 'E'), ival_M_to_B_end)
- assert_equal(ival_M.asfreq('D', 'S'), ival_M_to_D_start)
- assert_equal(ival_M.asfreq('D', 'E'), ival_M_to_D_end)
- assert_equal(ival_M.asfreq('H', 'S'), ival_M_to_H_start)
- assert_equal(ival_M.asfreq('H', 'E'), ival_M_to_H_end)
- assert_equal(ival_M.asfreq('Min', 'S'), ival_M_to_T_start)
- assert_equal(ival_M.asfreq('Min', 'E'), ival_M_to_T_end)
- assert_equal(ival_M.asfreq('S', 'S'), ival_M_to_S_start)
- assert_equal(ival_M.asfreq('S', 'E'), ival_M_to_S_end)
-
- assert_equal(ival_M.asfreq('M'), ival_M)
+ self.assertEqual(ival_M.asfreq('A'), ival_M_to_A)
+ self.assertEqual(ival_M_end_of_year.asfreq('A'), ival_M_to_A)
+ self.assertEqual(ival_M.asfreq('Q'), ival_M_to_Q)
+ self.assertEqual(ival_M_end_of_quarter.asfreq('Q'), ival_M_to_Q)
+
+ self.assertEqual(ival_M.asfreq('W', 'S'), ival_M_to_W_start)
+ self.assertEqual(ival_M.asfreq('W', 'E'), ival_M_to_W_end)
+ self.assertEqual(ival_M.asfreq('B', 'S'), ival_M_to_B_start)
+ self.assertEqual(ival_M.asfreq('B', 'E'), ival_M_to_B_end)
+ self.assertEqual(ival_M.asfreq('D', 'S'), ival_M_to_D_start)
+ self.assertEqual(ival_M.asfreq('D', 'E'), ival_M_to_D_end)
+ self.assertEqual(ival_M.asfreq('H', 'S'), ival_M_to_H_start)
+ self.assertEqual(ival_M.asfreq('H', 'E'), ival_M_to_H_end)
+ self.assertEqual(ival_M.asfreq('Min', 'S'), ival_M_to_T_start)
+ self.assertEqual(ival_M.asfreq('Min', 'E'), ival_M_to_T_end)
+ self.assertEqual(ival_M.asfreq('S', 'S'), ival_M_to_S_start)
+ self.assertEqual(ival_M.asfreq('S', 'E'), ival_M_to_S_end)
+
+ self.assertEqual(ival_M.asfreq('M'), ival_M)
def test_conv_weekly(self):
# frequency conversion tests: from Weekly Frequency
@@ -1093,43 +1091,45 @@ def test_conv_weekly(self):
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23,
minute=59, second=59)
- assert_equal(ival_W.asfreq('A'), ival_W_to_A)
- assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year)
- assert_equal(ival_W.asfreq('Q'), ival_W_to_Q)
- assert_equal(ival_W_end_of_quarter.asfreq('Q'),
- ival_W_to_Q_end_of_quarter)
- assert_equal(ival_W.asfreq('M'), ival_W_to_M)
- assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month)
-
- assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start)
- assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end)
-
- assert_equal(ival_W.asfreq('D', 'S'), ival_W_to_D_start)
- assert_equal(ival_W.asfreq('D', 'E'), ival_W_to_D_end)
-
- assert_equal(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start)
- assert_equal(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end)
- assert_equal(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start)
- assert_equal(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end)
- assert_equal(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start)
- assert_equal(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end)
- assert_equal(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start)
- assert_equal(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end)
- assert_equal(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start)
- assert_equal(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end)
- assert_equal(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start)
- assert_equal(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end)
- assert_equal(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start)
- assert_equal(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end)
-
- assert_equal(ival_W.asfreq('H', 'S'), ival_W_to_H_start)
- assert_equal(ival_W.asfreq('H', 'E'), ival_W_to_H_end)
- assert_equal(ival_W.asfreq('Min', 'S'), ival_W_to_T_start)
- assert_equal(ival_W.asfreq('Min', 'E'), ival_W_to_T_end)
- assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
- assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
-
- assert_equal(ival_W.asfreq('W'), ival_W)
+ self.assertEqual(ival_W.asfreq('A'), ival_W_to_A)
+ self.assertEqual(ival_W_end_of_year.asfreq('A'),
+ ival_W_to_A_end_of_year)
+ self.assertEqual(ival_W.asfreq('Q'), ival_W_to_Q)
+ self.assertEqual(ival_W_end_of_quarter.asfreq('Q'),
+ ival_W_to_Q_end_of_quarter)
+ self.assertEqual(ival_W.asfreq('M'), ival_W_to_M)
+ self.assertEqual(ival_W_end_of_month.asfreq('M'),
+ ival_W_to_M_end_of_month)
+
+ self.assertEqual(ival_W.asfreq('B', 'S'), ival_W_to_B_start)
+ self.assertEqual(ival_W.asfreq('B', 'E'), ival_W_to_B_end)
+
+ self.assertEqual(ival_W.asfreq('D', 'S'), ival_W_to_D_start)
+ self.assertEqual(ival_W.asfreq('D', 'E'), ival_W_to_D_end)
+
+ self.assertEqual(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start)
+ self.assertEqual(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end)
+ self.assertEqual(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start)
+ self.assertEqual(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end)
+ self.assertEqual(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start)
+ self.assertEqual(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end)
+ self.assertEqual(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start)
+ self.assertEqual(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end)
+ self.assertEqual(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start)
+ self.assertEqual(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end)
+ self.assertEqual(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start)
+ self.assertEqual(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end)
+ self.assertEqual(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start)
+ self.assertEqual(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end)
+
+ self.assertEqual(ival_W.asfreq('H', 'S'), ival_W_to_H_start)
+ self.assertEqual(ival_W.asfreq('H', 'E'), ival_W_to_H_end)
+ self.assertEqual(ival_W.asfreq('Min', 'S'), ival_W_to_T_start)
+ self.assertEqual(ival_W.asfreq('Min', 'E'), ival_W_to_T_end)
+ self.assertEqual(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
+ self.assertEqual(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
+
+ self.assertEqual(ival_W.asfreq('W'), ival_W)
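Weekly periods carry a day-of-week anchor (`W-SUN`, `W-SAT`, ..., `W-MON`), which is why the test above builds a separate expectation per anchor. A sketch of one case:

```python
import pandas as pd

# A W-WED period ends on Wednesday, so its daily span runs from the
# preceding Thursday through that Wednesday.
w = pd.Period('2007-01-03', freq='W-WED')  # 2007-01-03 was a Wednesday
print(w.asfreq('D', 'S'))  # first day of the weekly span
print(w.asfreq('D', 'E'))  # last day of the weekly span
```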
def test_conv_weekly_legacy(self):
# frequency conversion tests: from Weekly Frequency
@@ -1208,44 +1208,46 @@ def test_conv_weekly_legacy(self):
ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23,
minute=59, second=59)
- assert_equal(ival_W.asfreq('A'), ival_W_to_A)
- assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year)
- assert_equal(ival_W.asfreq('Q'), ival_W_to_Q)
- assert_equal(ival_W_end_of_quarter.asfreq('Q'),
- ival_W_to_Q_end_of_quarter)
- assert_equal(ival_W.asfreq('M'), ival_W_to_M)
- assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month)
-
- assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start)
- assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end)
-
- assert_equal(ival_W.asfreq('D', 'S'), ival_W_to_D_start)
- assert_equal(ival_W.asfreq('D', 'E'), ival_W_to_D_end)
-
- assert_equal(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start)
- assert_equal(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end)
- assert_equal(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start)
- assert_equal(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end)
- assert_equal(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start)
- assert_equal(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end)
- assert_equal(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start)
- assert_equal(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end)
- assert_equal(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start)
- assert_equal(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end)
- assert_equal(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start)
- assert_equal(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end)
- assert_equal(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start)
- assert_equal(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end)
-
- assert_equal(ival_W.asfreq('H', 'S'), ival_W_to_H_start)
- assert_equal(ival_W.asfreq('H', 'E'), ival_W_to_H_end)
- assert_equal(ival_W.asfreq('Min', 'S'), ival_W_to_T_start)
- assert_equal(ival_W.asfreq('Min', 'E'), ival_W_to_T_end)
- assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
- assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
+ self.assertEqual(ival_W.asfreq('A'), ival_W_to_A)
+ self.assertEqual(ival_W_end_of_year.asfreq('A'),
+ ival_W_to_A_end_of_year)
+ self.assertEqual(ival_W.asfreq('Q'), ival_W_to_Q)
+ self.assertEqual(ival_W_end_of_quarter.asfreq('Q'),
+ ival_W_to_Q_end_of_quarter)
+ self.assertEqual(ival_W.asfreq('M'), ival_W_to_M)
+ self.assertEqual(ival_W_end_of_month.asfreq('M'),
+ ival_W_to_M_end_of_month)
+
+ self.assertEqual(ival_W.asfreq('B', 'S'), ival_W_to_B_start)
+ self.assertEqual(ival_W.asfreq('B', 'E'), ival_W_to_B_end)
+
+ self.assertEqual(ival_W.asfreq('D', 'S'), ival_W_to_D_start)
+ self.assertEqual(ival_W.asfreq('D', 'E'), ival_W_to_D_end)
+
+ self.assertEqual(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start)
+ self.assertEqual(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end)
+ self.assertEqual(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start)
+ self.assertEqual(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end)
+ self.assertEqual(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start)
+ self.assertEqual(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end)
+ self.assertEqual(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start)
+ self.assertEqual(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end)
+ self.assertEqual(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start)
+ self.assertEqual(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end)
+ self.assertEqual(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start)
+ self.assertEqual(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end)
+ self.assertEqual(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start)
+ self.assertEqual(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end)
+
+ self.assertEqual(ival_W.asfreq('H', 'S'), ival_W_to_H_start)
+ self.assertEqual(ival_W.asfreq('H', 'E'), ival_W_to_H_end)
+ self.assertEqual(ival_W.asfreq('Min', 'S'), ival_W_to_T_start)
+ self.assertEqual(ival_W.asfreq('Min', 'E'), ival_W_to_T_end)
+ self.assertEqual(ival_W.asfreq('S', 'S'), ival_W_to_S_start)
+ self.assertEqual(ival_W.asfreq('S', 'E'), ival_W_to_S_end)
with tm.assert_produces_warning(FutureWarning):
- assert_equal(ival_W.asfreq('WK'), ival_W)
+ self.assertEqual(ival_W.asfreq('WK'), ival_W)
def test_conv_business(self):
# frequency conversion tests: from Business Frequency
@@ -1272,25 +1274,25 @@ def test_conv_business(self):
ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23,
minute=59, second=59)
- assert_equal(ival_B.asfreq('A'), ival_B_to_A)
- assert_equal(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
- assert_equal(ival_B.asfreq('Q'), ival_B_to_Q)
- assert_equal(ival_B_end_of_quarter.asfreq('Q'), ival_B_to_Q)
- assert_equal(ival_B.asfreq('M'), ival_B_to_M)
- assert_equal(ival_B_end_of_month.asfreq('M'), ival_B_to_M)
- assert_equal(ival_B.asfreq('W'), ival_B_to_W)
- assert_equal(ival_B_end_of_week.asfreq('W'), ival_B_to_W)
+ self.assertEqual(ival_B.asfreq('A'), ival_B_to_A)
+ self.assertEqual(ival_B_end_of_year.asfreq('A'), ival_B_to_A)
+ self.assertEqual(ival_B.asfreq('Q'), ival_B_to_Q)
+ self.assertEqual(ival_B_end_of_quarter.asfreq('Q'), ival_B_to_Q)
+ self.assertEqual(ival_B.asfreq('M'), ival_B_to_M)
+ self.assertEqual(ival_B_end_of_month.asfreq('M'), ival_B_to_M)
+ self.assertEqual(ival_B.asfreq('W'), ival_B_to_W)
+ self.assertEqual(ival_B_end_of_week.asfreq('W'), ival_B_to_W)
- assert_equal(ival_B.asfreq('D'), ival_B_to_D)
+ self.assertEqual(ival_B.asfreq('D'), ival_B_to_D)
- assert_equal(ival_B.asfreq('H', 'S'), ival_B_to_H_start)
- assert_equal(ival_B.asfreq('H', 'E'), ival_B_to_H_end)
- assert_equal(ival_B.asfreq('Min', 'S'), ival_B_to_T_start)
- assert_equal(ival_B.asfreq('Min', 'E'), ival_B_to_T_end)
- assert_equal(ival_B.asfreq('S', 'S'), ival_B_to_S_start)
- assert_equal(ival_B.asfreq('S', 'E'), ival_B_to_S_end)
+ self.assertEqual(ival_B.asfreq('H', 'S'), ival_B_to_H_start)
+ self.assertEqual(ival_B.asfreq('H', 'E'), ival_B_to_H_end)
+ self.assertEqual(ival_B.asfreq('Min', 'S'), ival_B_to_T_start)
+ self.assertEqual(ival_B.asfreq('Min', 'E'), ival_B_to_T_end)
+ self.assertEqual(ival_B.asfreq('S', 'S'), ival_B_to_S_start)
+ self.assertEqual(ival_B.asfreq('S', 'E'), ival_B_to_S_end)
- assert_equal(ival_B.asfreq('B'), ival_B)
+ self.assertEqual(ival_B.asfreq('B'), ival_B)
def test_conv_daily(self):
# frequency conversion tests: from Daily Frequency
@@ -1335,36 +1337,39 @@ def test_conv_daily(self):
ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23,
minute=59, second=59)
- assert_equal(ival_D.asfreq('A'), ival_D_to_A)
-
- assert_equal(ival_D_end_of_quarter.asfreq('A-JAN'), ival_Deoq_to_AJAN)
- assert_equal(ival_D_end_of_quarter.asfreq('A-JUN'), ival_Deoq_to_AJUN)
- assert_equal(ival_D_end_of_quarter.asfreq('A-DEC'), ival_Deoq_to_ADEC)
-
- assert_equal(ival_D_end_of_year.asfreq('A'), ival_D_to_A)
- assert_equal(ival_D_end_of_quarter.asfreq('Q'), ival_D_to_QEDEC)
- assert_equal(ival_D.asfreq("Q-JAN"), ival_D_to_QEJAN)
- assert_equal(ival_D.asfreq("Q-JUN"), ival_D_to_QEJUN)
- assert_equal(ival_D.asfreq("Q-DEC"), ival_D_to_QEDEC)
- assert_equal(ival_D.asfreq('M'), ival_D_to_M)
- assert_equal(ival_D_end_of_month.asfreq('M'), ival_D_to_M)
- assert_equal(ival_D.asfreq('W'), ival_D_to_W)
- assert_equal(ival_D_end_of_week.asfreq('W'), ival_D_to_W)
-
- assert_equal(ival_D_friday.asfreq('B'), ival_B_friday)
- assert_equal(ival_D_saturday.asfreq('B', 'S'), ival_B_friday)
- assert_equal(ival_D_saturday.asfreq('B', 'E'), ival_B_monday)
- assert_equal(ival_D_sunday.asfreq('B', 'S'), ival_B_friday)
- assert_equal(ival_D_sunday.asfreq('B', 'E'), ival_B_monday)
-
- assert_equal(ival_D.asfreq('H', 'S'), ival_D_to_H_start)
- assert_equal(ival_D.asfreq('H', 'E'), ival_D_to_H_end)
- assert_equal(ival_D.asfreq('Min', 'S'), ival_D_to_T_start)
- assert_equal(ival_D.asfreq('Min', 'E'), ival_D_to_T_end)
- assert_equal(ival_D.asfreq('S', 'S'), ival_D_to_S_start)
- assert_equal(ival_D.asfreq('S', 'E'), ival_D_to_S_end)
-
- assert_equal(ival_D.asfreq('D'), ival_D)
+ self.assertEqual(ival_D.asfreq('A'), ival_D_to_A)
+
+ self.assertEqual(ival_D_end_of_quarter.asfreq('A-JAN'),
+ ival_Deoq_to_AJAN)
+ self.assertEqual(ival_D_end_of_quarter.asfreq('A-JUN'),
+ ival_Deoq_to_AJUN)
+ self.assertEqual(ival_D_end_of_quarter.asfreq('A-DEC'),
+ ival_Deoq_to_ADEC)
+
+ self.assertEqual(ival_D_end_of_year.asfreq('A'), ival_D_to_A)
+ self.assertEqual(ival_D_end_of_quarter.asfreq('Q'), ival_D_to_QEDEC)
+ self.assertEqual(ival_D.asfreq("Q-JAN"), ival_D_to_QEJAN)
+ self.assertEqual(ival_D.asfreq("Q-JUN"), ival_D_to_QEJUN)
+ self.assertEqual(ival_D.asfreq("Q-DEC"), ival_D_to_QEDEC)
+ self.assertEqual(ival_D.asfreq('M'), ival_D_to_M)
+ self.assertEqual(ival_D_end_of_month.asfreq('M'), ival_D_to_M)
+ self.assertEqual(ival_D.asfreq('W'), ival_D_to_W)
+ self.assertEqual(ival_D_end_of_week.asfreq('W'), ival_D_to_W)
+
+ self.assertEqual(ival_D_friday.asfreq('B'), ival_B_friday)
+ self.assertEqual(ival_D_saturday.asfreq('B', 'S'), ival_B_friday)
+ self.assertEqual(ival_D_saturday.asfreq('B', 'E'), ival_B_monday)
+ self.assertEqual(ival_D_sunday.asfreq('B', 'S'), ival_B_friday)
+ self.assertEqual(ival_D_sunday.asfreq('B', 'E'), ival_B_monday)
+
+ self.assertEqual(ival_D.asfreq('H', 'S'), ival_D_to_H_start)
+ self.assertEqual(ival_D.asfreq('H', 'E'), ival_D_to_H_end)
+ self.assertEqual(ival_D.asfreq('Min', 'S'), ival_D_to_T_start)
+ self.assertEqual(ival_D.asfreq('Min', 'E'), ival_D_to_T_end)
+ self.assertEqual(ival_D.asfreq('S', 'S'), ival_D_to_S_start)
+ self.assertEqual(ival_D.asfreq('S', 'E'), ival_D_to_S_end)
+
+ self.assertEqual(ival_D.asfreq('D'), ival_D)
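The `Q-JAN`/`Q-JUN` conversions above rely on fiscal quarter anchoring: `Q-JAN` means quarters of a fiscal year ending in January, so January 2007 lands in the fourth quarter of fiscal 2007. A sketch of that mapping:

```python
import pandas as pd

# With a fiscal year ending in January, Q4 covers Nov-Jan, so
# 2007-01-01 maps to fiscal quarter 4 of 2007 under Q-JAN.
d = pd.Period('2007-01-01', freq='D')
print(d.asfreq('Q-JAN'))
```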
def test_conv_hourly(self):
# frequency conversion tests: from Hourly Frequency
@@ -1399,25 +1404,25 @@ def test_conv_hourly(self):
ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
minute=59, second=59)
- assert_equal(ival_H.asfreq('A'), ival_H_to_A)
- assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
- assert_equal(ival_H.asfreq('Q'), ival_H_to_Q)
- assert_equal(ival_H_end_of_quarter.asfreq('Q'), ival_H_to_Q)
- assert_equal(ival_H.asfreq('M'), ival_H_to_M)
- assert_equal(ival_H_end_of_month.asfreq('M'), ival_H_to_M)
- assert_equal(ival_H.asfreq('W'), ival_H_to_W)
- assert_equal(ival_H_end_of_week.asfreq('W'), ival_H_to_W)
- assert_equal(ival_H.asfreq('D'), ival_H_to_D)
- assert_equal(ival_H_end_of_day.asfreq('D'), ival_H_to_D)
- assert_equal(ival_H.asfreq('B'), ival_H_to_B)
- assert_equal(ival_H_end_of_bus.asfreq('B'), ival_H_to_B)
-
- assert_equal(ival_H.asfreq('Min', 'S'), ival_H_to_T_start)
- assert_equal(ival_H.asfreq('Min', 'E'), ival_H_to_T_end)
- assert_equal(ival_H.asfreq('S', 'S'), ival_H_to_S_start)
- assert_equal(ival_H.asfreq('S', 'E'), ival_H_to_S_end)
-
- assert_equal(ival_H.asfreq('H'), ival_H)
+ self.assertEqual(ival_H.asfreq('A'), ival_H_to_A)
+ self.assertEqual(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
+ self.assertEqual(ival_H.asfreq('Q'), ival_H_to_Q)
+ self.assertEqual(ival_H_end_of_quarter.asfreq('Q'), ival_H_to_Q)
+ self.assertEqual(ival_H.asfreq('M'), ival_H_to_M)
+ self.assertEqual(ival_H_end_of_month.asfreq('M'), ival_H_to_M)
+ self.assertEqual(ival_H.asfreq('W'), ival_H_to_W)
+ self.assertEqual(ival_H_end_of_week.asfreq('W'), ival_H_to_W)
+ self.assertEqual(ival_H.asfreq('D'), ival_H_to_D)
+ self.assertEqual(ival_H_end_of_day.asfreq('D'), ival_H_to_D)
+ self.assertEqual(ival_H.asfreq('B'), ival_H_to_B)
+ self.assertEqual(ival_H_end_of_bus.asfreq('B'), ival_H_to_B)
+
+ self.assertEqual(ival_H.asfreq('Min', 'S'), ival_H_to_T_start)
+ self.assertEqual(ival_H.asfreq('Min', 'E'), ival_H_to_T_end)
+ self.assertEqual(ival_H.asfreq('S', 'S'), ival_H_to_S_start)
+ self.assertEqual(ival_H.asfreq('S', 'E'), ival_H_to_S_end)
+
+ self.assertEqual(ival_H.asfreq('H'), ival_H)
def test_conv_minutely(self):
# frequency conversion tests: from Minutely Frequency
@@ -1452,25 +1457,25 @@ def test_conv_minutely(self):
ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
minute=0, second=59)
- assert_equal(ival_T.asfreq('A'), ival_T_to_A)
- assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
- assert_equal(ival_T.asfreq('Q'), ival_T_to_Q)
- assert_equal(ival_T_end_of_quarter.asfreq('Q'), ival_T_to_Q)
- assert_equal(ival_T.asfreq('M'), ival_T_to_M)
- assert_equal(ival_T_end_of_month.asfreq('M'), ival_T_to_M)
- assert_equal(ival_T.asfreq('W'), ival_T_to_W)
- assert_equal(ival_T_end_of_week.asfreq('W'), ival_T_to_W)
- assert_equal(ival_T.asfreq('D'), ival_T_to_D)
- assert_equal(ival_T_end_of_day.asfreq('D'), ival_T_to_D)
- assert_equal(ival_T.asfreq('B'), ival_T_to_B)
- assert_equal(ival_T_end_of_bus.asfreq('B'), ival_T_to_B)
- assert_equal(ival_T.asfreq('H'), ival_T_to_H)
- assert_equal(ival_T_end_of_hour.asfreq('H'), ival_T_to_H)
-
- assert_equal(ival_T.asfreq('S', 'S'), ival_T_to_S_start)
- assert_equal(ival_T.asfreq('S', 'E'), ival_T_to_S_end)
-
- assert_equal(ival_T.asfreq('Min'), ival_T)
+ self.assertEqual(ival_T.asfreq('A'), ival_T_to_A)
+ self.assertEqual(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
+ self.assertEqual(ival_T.asfreq('Q'), ival_T_to_Q)
+ self.assertEqual(ival_T_end_of_quarter.asfreq('Q'), ival_T_to_Q)
+ self.assertEqual(ival_T.asfreq('M'), ival_T_to_M)
+ self.assertEqual(ival_T_end_of_month.asfreq('M'), ival_T_to_M)
+ self.assertEqual(ival_T.asfreq('W'), ival_T_to_W)
+ self.assertEqual(ival_T_end_of_week.asfreq('W'), ival_T_to_W)
+ self.assertEqual(ival_T.asfreq('D'), ival_T_to_D)
+ self.assertEqual(ival_T_end_of_day.asfreq('D'), ival_T_to_D)
+ self.assertEqual(ival_T.asfreq('B'), ival_T_to_B)
+ self.assertEqual(ival_T_end_of_bus.asfreq('B'), ival_T_to_B)
+ self.assertEqual(ival_T.asfreq('H'), ival_T_to_H)
+ self.assertEqual(ival_T_end_of_hour.asfreq('H'), ival_T_to_H)
+
+ self.assertEqual(ival_T.asfreq('S', 'S'), ival_T_to_S_start)
+ self.assertEqual(ival_T.asfreq('S', 'E'), ival_T_to_S_end)
+
+ self.assertEqual(ival_T.asfreq('Min'), ival_T)
def test_conv_secondly(self):
# frequency conversion tests: from Secondly Frequency
@@ -1504,24 +1509,24 @@ def test_conv_secondly(self):
ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
minute=0)
- assert_equal(ival_S.asfreq('A'), ival_S_to_A)
- assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
- assert_equal(ival_S.asfreq('Q'), ival_S_to_Q)
- assert_equal(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q)
- assert_equal(ival_S.asfreq('M'), ival_S_to_M)
- assert_equal(ival_S_end_of_month.asfreq('M'), ival_S_to_M)
- assert_equal(ival_S.asfreq('W'), ival_S_to_W)
- assert_equal(ival_S_end_of_week.asfreq('W'), ival_S_to_W)
- assert_equal(ival_S.asfreq('D'), ival_S_to_D)
- assert_equal(ival_S_end_of_day.asfreq('D'), ival_S_to_D)
- assert_equal(ival_S.asfreq('B'), ival_S_to_B)
- assert_equal(ival_S_end_of_bus.asfreq('B'), ival_S_to_B)
- assert_equal(ival_S.asfreq('H'), ival_S_to_H)
- assert_equal(ival_S_end_of_hour.asfreq('H'), ival_S_to_H)
- assert_equal(ival_S.asfreq('Min'), ival_S_to_T)
- assert_equal(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T)
-
- assert_equal(ival_S.asfreq('S'), ival_S)
+ self.assertEqual(ival_S.asfreq('A'), ival_S_to_A)
+ self.assertEqual(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
+ self.assertEqual(ival_S.asfreq('Q'), ival_S_to_Q)
+ self.assertEqual(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q)
+ self.assertEqual(ival_S.asfreq('M'), ival_S_to_M)
+ self.assertEqual(ival_S_end_of_month.asfreq('M'), ival_S_to_M)
+ self.assertEqual(ival_S.asfreq('W'), ival_S_to_W)
+ self.assertEqual(ival_S_end_of_week.asfreq('W'), ival_S_to_W)
+ self.assertEqual(ival_S.asfreq('D'), ival_S_to_D)
+ self.assertEqual(ival_S_end_of_day.asfreq('D'), ival_S_to_D)
+ self.assertEqual(ival_S.asfreq('B'), ival_S_to_B)
+ self.assertEqual(ival_S_end_of_bus.asfreq('B'), ival_S_to_B)
+ self.assertEqual(ival_S.asfreq('H'), ival_S_to_H)
+ self.assertEqual(ival_S_end_of_hour.asfreq('H'), ival_S_to_H)
+ self.assertEqual(ival_S.asfreq('Min'), ival_S_to_T)
+ self.assertEqual(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T)
+
+ self.assertEqual(ival_S.asfreq('S'), ival_S)
def test_asfreq_nat(self):
p = Period('NaT', freq='A')
@@ -2246,52 +2251,52 @@ def test_index_unique(self):
def test_constructor(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 9)
+ self.assertEqual(len(pi), 9)
pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 4 * 9)
+ self.assertEqual(len(pi), 4 * 9)
pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 12 * 9)
+ self.assertEqual(len(pi), 12 * 9)
pi = PeriodIndex(freq='D', start='1/1/2001', end='12/31/2009')
- assert_equal(len(pi), 365 * 9 + 2)
+ self.assertEqual(len(pi), 365 * 9 + 2)
pi = PeriodIndex(freq='B', start='1/1/2001', end='12/31/2009')
- assert_equal(len(pi), 261 * 9)
+ self.assertEqual(len(pi), 261 * 9)
pi = PeriodIndex(freq='H', start='1/1/2001', end='12/31/2001 23:00')
- assert_equal(len(pi), 365 * 24)
+ self.assertEqual(len(pi), 365 * 24)
pi = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 23:59')
- assert_equal(len(pi), 24 * 60)
+ self.assertEqual(len(pi), 24 * 60)
pi = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 23:59:59')
- assert_equal(len(pi), 24 * 60 * 60)
+ self.assertEqual(len(pi), 24 * 60 * 60)
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
- assert_equal(len(i1), 20)
- assert_equal(i1.freq, start.freq)
- assert_equal(i1[0], start)
+ self.assertEqual(len(i1), 20)
+ self.assertEqual(i1.freq, start.freq)
+ self.assertEqual(i1[0], start)
end_intv = Period('2006-12-31', 'W')
i1 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), 10)
- assert_equal(i1.freq, end_intv.freq)
- assert_equal(i1[-1], end_intv)
+ self.assertEqual(len(i1), 10)
+ self.assertEqual(i1.freq, end_intv.freq)
+ self.assertEqual(i1[-1], end_intv)
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), len(i2))
+ self.assertEqual(len(i1), len(i2))
self.assertTrue((i1 == i2).all())
- assert_equal(i1.freq, i2.freq)
+ self.assertEqual(i1.freq, i2.freq)
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), len(i2))
+ self.assertEqual(len(i1), len(i2))
self.assertTrue((i1 == i2).all())
- assert_equal(i1.freq, i2.freq)
+ self.assertEqual(i1.freq, i2.freq)
try:
PeriodIndex(start=start, end=end_intv)
@@ -2311,12 +2316,12 @@ def test_constructor(self):
# infer freq from first element
i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
- assert_equal(len(i2), 2)
- assert_equal(i2[0], end_intv)
+ self.assertEqual(len(i2), 2)
+ self.assertEqual(i2[0], end_intv)
i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
- assert_equal(len(i2), 2)
- assert_equal(i2[0], end_intv)
+ self.assertEqual(len(i2), 2)
+ self.assertEqual(i2[0], end_intv)
# Mixed freq should fail
vals = [end_intv, Period('2006-12-31', 'w')]
@@ -2352,33 +2357,33 @@ def test_shift(self):
tm.assert_index_equal(pi1.shift(0), pi1)
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='A', start='1/1/2000', end='12/1/2008')
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(-1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(-1), pi2)
pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='M', start='2/1/2001', end='1/1/2010')
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='M', start='12/1/2000', end='11/1/2009')
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(-1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(-1), pi2)
pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='D', start='1/2/2001', end='12/2/2009')
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(1), pi2)
pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='D', start='12/31/2000', end='11/30/2009')
- assert_equal(len(pi1), len(pi2))
- assert_equal(pi1.shift(-1).values, pi2.values)
+ self.assertEqual(len(pi1), len(pi2))
+ self.assert_index_equal(pi1.shift(-1), pi2)
def test_shift_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
@@ -2496,37 +2501,37 @@ def test_asfreq_mult_pi(self):
def test_period_index_length(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 9)
+ self.assertEqual(len(pi), 9)
pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 4 * 9)
+ self.assertEqual(len(pi), 4 * 9)
pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
- assert_equal(len(pi), 12 * 9)
+ self.assertEqual(len(pi), 12 * 9)
start = Period('02-Apr-2005', 'B')
i1 = PeriodIndex(start=start, periods=20)
- assert_equal(len(i1), 20)
- assert_equal(i1.freq, start.freq)
- assert_equal(i1[0], start)
+ self.assertEqual(len(i1), 20)
+ self.assertEqual(i1.freq, start.freq)
+ self.assertEqual(i1[0], start)
end_intv = Period('2006-12-31', 'W')
i1 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), 10)
- assert_equal(i1.freq, end_intv.freq)
- assert_equal(i1[-1], end_intv)
+ self.assertEqual(len(i1), 10)
+ self.assertEqual(i1.freq, end_intv.freq)
+ self.assertEqual(i1[-1], end_intv)
end_intv = Period('2006-12-31', '1w')
i2 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), len(i2))
+ self.assertEqual(len(i1), len(i2))
self.assertTrue((i1 == i2).all())
- assert_equal(i1.freq, i2.freq)
+ self.assertEqual(i1.freq, i2.freq)
end_intv = Period('2006-12-31', ('w', 1))
i2 = PeriodIndex(end=end_intv, periods=10)
- assert_equal(len(i1), len(i2))
+ self.assertEqual(len(i1), len(i2))
self.assertTrue((i1 == i2).all())
- assert_equal(i1.freq, i2.freq)
+ self.assertEqual(i1.freq, i2.freq)
try:
PeriodIndex(start=start, end=end_intv)
@@ -2546,12 +2551,12 @@ def test_period_index_length(self):
# infer freq from first element
i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
- assert_equal(len(i2), 2)
- assert_equal(i2[0], end_intv)
+ self.assertEqual(len(i2), 2)
+ self.assertEqual(i2[0], end_intv)
i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
- assert_equal(len(i2), 2)
- assert_equal(i2[0], end_intv)
+ self.assertEqual(len(i2), 2)
+ self.assertEqual(i2[0], end_intv)
# Mixed freq should fail
vals = [end_intv, Period('2006-12-31', 'w')]
@@ -3124,9 +3129,9 @@ def _check_all_fields(self, periodindex):
for field in fields:
field_idx = getattr(periodindex, field)
- assert_equal(len(periodindex), len(field_idx))
+ self.assertEqual(len(periodindex), len(field_idx))
for x, val in zip(periods, field_idx):
- assert_equal(getattr(x, field), val)
+ self.assertEqual(getattr(x, field), val)
def test_is_full(self):
index = PeriodIndex([2005, 2007, 2009], freq='A')
@@ -3327,8 +3332,8 @@ class TestMethods(tm.TestCase):
def test_add(self):
dt1 = Period(freq='D', year=2008, month=1, day=1)
dt2 = Period(freq='D', year=2008, month=1, day=2)
- assert_equal(dt1 + 1, dt2)
- assert_equal(1 + dt1, dt2)
+ self.assertEqual(dt1 + 1, dt2)
+ self.assertEqual(1 + dt1, dt2)
def test_add_pdnat(self):
p = pd.Period('2011-01', freq='M')
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 0284df9e58933..67df62e1ebb57 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -4,8 +4,6 @@
from pandas.compat import lrange, zip
import numpy as np
-from numpy.testing.decorators import slow
-
from pandas import Index, Series, DataFrame
from pandas.tseries.index import date_range, bdate_range
@@ -13,7 +11,7 @@
from pandas.tseries.period import period_range, Period, PeriodIndex
from pandas.tseries.resample import DatetimeIndex
-from pandas.util.testing import assert_series_equal, ensure_clean
+from pandas.util.testing import assert_series_equal, ensure_clean, slow
import pandas.util.testing as tm
from pandas.tests.test_graphics import _skip_if_no_scipy_gaussian_kde
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index 8d02c43e68be3..20098488f7f1c 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -16,7 +16,6 @@
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type as ct
from pandas.util.testing import (assert_series_equal, assert_frame_equal,
assert_almost_equal, assert_index_equal)
-from numpy.testing import assert_allclose
from pandas.tseries.offsets import Day, Second
import pandas.util.testing as tm
from numpy.random import randn
@@ -1224,7 +1223,7 @@ def test_total_seconds(self):
freq='s')
expt = [1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9,
1 * 86400 + 10 * 3600 + 11 * 60 + 13 + 100123456. / 1e9]
- assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0)
+ tm.assert_almost_equal(rng.total_seconds(), expt)
# test Series
s = Series(rng)
@@ -1239,14 +1238,14 @@ def test_total_seconds(self):
# with both nat
s = Series([np.nan, np.nan], dtype='timedelta64[ns]')
- tm.assert_series_equal(s.dt.total_seconds(), Series(
- [np.nan, np.nan], index=[0, 1]))
+ tm.assert_series_equal(s.dt.total_seconds(),
+ Series([np.nan, np.nan], index=[0, 1]))
def test_total_seconds_scalar(self):
# GH 10939
rng = Timedelta('1 days, 10:11:12.100123456')
expt = 1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9
- assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0)
+ tm.assert_almost_equal(rng.total_seconds(), expt)
rng = Timedelta(np.nan)
self.assertTrue(np.isnan(rng.total_seconds()))
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 1564c0a81585e..3a3315ed3890c 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -5,7 +5,6 @@
import warnings
from datetime import datetime, time, timedelta
from numpy.random import rand
-from numpy.testing.decorators import slow
import nose
import numpy as np
@@ -31,7 +30,7 @@
from pandas.tslib import iNaT
from pandas.util.testing import (
assert_frame_equal, assert_series_equal, assert_almost_equal,
- _skip_if_has_locale)
+ _skip_if_has_locale, slow)
randn = np.random.randn
@@ -1110,8 +1109,8 @@ def test_asfreq_keep_index_name(self):
index = pd.date_range('20130101', periods=20, name=index_name)
df = pd.DataFrame([x for x in range(20)], columns=['foo'], index=index)
- tm.assert_equal(index_name, df.index.name)
- tm.assert_equal(index_name, df.asfreq('10D').index.name)
+ self.assertEqual(index_name, df.index.name)
+ self.assertEqual(index_name, df.asfreq('10D').index.name)
def test_promote_datetime_date(self):
rng = date_range('1/1/2000', periods=20)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index dd66d732ba684..e39dc441bcca4 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -19,6 +19,7 @@
from distutils.version import LooseVersion
from numpy.random import randn, rand
+from numpy.testing.decorators import slow # noqa
import numpy as np
import pandas as pd
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
Changed tests not to use `np.testing.assert_equal`, as `assert_numpy_array_equal` is stricter and also checks dtypes.
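For context, a minimal illustration of the dtype-strictness difference, using plain NumPy rather than pandas' internal helper:

```python
import numpy as np

a = np.array([1, 2], dtype=np.int8)
b = np.array([1, 2], dtype=np.int64)

# np.testing.assert_equal compares values only, so arrays with
# different dtypes are still considered equal:
np.testing.assert_equal(a, b)  # passes silently

# a dtype-strict comparison (as assert_numpy_array_equal does)
# would also compare the dtypes and flag this pair:
print(a.dtype == b.dtype)  # → False
```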
| https://api.github.com/repos/pandas-dev/pandas/pulls/13263 | 2016-05-23T21:59:27Z | 2016-05-26T16:10:52Z | null | 2016-05-26T21:17:49Z |
BUG: remove_unused_categories dtype coerces to int64 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index 4b3c96da10efd..de987edcdc679 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -184,3 +184,5 @@ Bug Fixes
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
+
+- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 44c91862227d8..ea6e9012f7e8a 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -883,8 +883,8 @@ def remove_unused_categories(self, inplace=False):
if idx.size != 0 and idx[0] == -1: # na sentinel
idx, inv = idx[1:], inv - 1
- cat._codes = inv
cat._categories = cat.categories.take(idx)
+ cat._codes = _coerce_indexer_dtype(inv, self._categories)
if not inplace:
return cat
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 40ef5354e91bd..5a0d079efb4c2 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -1022,14 +1022,14 @@ def f():
def test_remove_unused_categories(self):
c = Categorical(["a", "b", "c", "d", "a"],
categories=["a", "b", "c", "d", "e"])
- exp_categories_all = np.array(["a", "b", "c", "d", "e"])
- exp_categories_dropped = np.array(["a", "b", "c", "d"])
+ exp_categories_all = Index(["a", "b", "c", "d", "e"])
+ exp_categories_dropped = Index(["a", "b", "c", "d"])
self.assert_numpy_array_equal(c.categories, exp_categories_all)
res = c.remove_unused_categories()
- self.assert_numpy_array_equal(res.categories, exp_categories_dropped)
- self.assert_numpy_array_equal(c.categories, exp_categories_all)
+ self.assert_index_equal(res.categories, exp_categories_dropped)
+ self.assert_index_equal(c.categories, exp_categories_all)
res = c.remove_unused_categories(inplace=True)
self.assert_numpy_array_equal(c.categories, exp_categories_dropped)
@@ -1039,15 +1039,18 @@ def test_remove_unused_categories(self):
c = Categorical(["a", "b", "c", np.nan],
categories=["a", "b", "c", "d", "e"])
res = c.remove_unused_categories()
- self.assert_numpy_array_equal(res.categories,
- np.array(["a", "b", "c"]))
- self.assert_numpy_array_equal(c.categories, exp_categories_all)
+ self.assert_index_equal(res.categories,
+ Index(np.array(["a", "b", "c"])))
+ exp_codes = np.array([0, 1, 2, -1], dtype=np.int8)
+ self.assert_numpy_array_equal(res.codes, exp_codes)
+ self.assert_index_equal(c.categories, exp_categories_all)
val = ['F', np.nan, 'D', 'B', 'D', 'F', np.nan]
cat = pd.Categorical(values=val, categories=list('ABCDEFG'))
out = cat.remove_unused_categories()
- self.assert_numpy_array_equal(out.categories, ['B', 'D', 'F'])
- self.assert_numpy_array_equal(out.codes, [2, -1, 1, 0, 1, 2, -1])
+ self.assert_index_equal(out.categories, Index(['B', 'D', 'F']))
+ exp_codes = np.array([2, -1, 1, 0, 1, 2, -1], dtype=np.int8)
+ self.assert_numpy_array_equal(out.codes, exp_codes)
self.assertEqual(out.get_values().tolist(), val)
alpha = list('abcdefghijklmnopqrstuvwxyz')
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
```
c = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
c.codes
# array([0, 1], dtype=int8)
# wrong below: the codes should remain int8 dtype
c = c.remove_unused_categories()
c.codes
# array([0, 1])
```
This is because `np.unique` returns platform int for the unique inverse:
```
np.unique(np.array([0, 3, 2, 3], dtype=np.int8), return_inverse=True)
(array([0, 2, 3], dtype=int8), array([0, 2, 1, 2]))
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/13261 | 2016-05-23T21:54:00Z | 2016-05-24T13:23:37Z | null | 2016-05-24T13:55:19Z |
ENH: Allow to_sql to recognize single sql type #11886 | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index dfb5ebc9379b1..f71ee1e1369bb 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -74,6 +74,8 @@ Other enhancements
pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)
+- ``DataFrame.to_sql `` now allows a single value as the SQL type for all columns (:issue:`11886`).
+
- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``decimal`` option (:issue:`12933`)
- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
@@ -89,7 +91,7 @@ Other enhancements
- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
-
+- ``DataFrame.to_sql `` now allows a single value as the SQL type for all columns (:issue:`11886`).
.. _whatsnew_0182.api:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 324988360c9fe..1e9771b140ff2 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -18,6 +18,7 @@
string_types, text_type)
from pandas.core.api import DataFrame, Series
from pandas.core.common import isnull
+from pandas.core.generic import is_dictlike
from pandas.core.base import PandasObject
from pandas.types.api import DatetimeTZDtype
from pandas.tseries.tools import to_datetime
@@ -550,9 +551,10 @@ def to_sql(frame, name, con, flavor='sqlite', schema=None, if_exists='fail',
chunksize : int, default None
If not None, then rows will be written in batches of this size at a
time. If None, all rows will be written at once.
- dtype : dict of column name to SQL type, default None
+ dtype : single SQLtype or dict of column name to SQL type, default None
Optional specifying the datatype for columns. The SQL type should
be a SQLAlchemy type, or a string for sqlite3 fallback connection.
+ If all columns are of the same type, one single value can be used.
"""
if if_exists not in ('fail', 'replace', 'append'):
@@ -1231,11 +1233,16 @@ def to_sql(self, frame, name, if_exists='fail', index=True,
chunksize : int, default None
If not None, then rows will be written in batches of this size at a
time. If None, all rows will be written at once.
- dtype : dict of column name to SQL type, default None
+ dtype : single SQL type or dict of column name to SQL type, default
+ None
Optional specifying the datatype for columns. The SQL type should
- be a SQLAlchemy type.
+ be a SQLAlchemy type. If all columns are of the same type, one
+ single value can be used.
"""
+ if dtype and not is_dictlike(dtype):
+ dtype = {col_name: dtype for col_name in frame}
+
if dtype is not None:
from sqlalchemy.types import to_instance, TypeEngine
for col, my_type in dtype.items():
@@ -1644,11 +1651,15 @@ def to_sql(self, frame, name, if_exists='fail', index=True,
chunksize : int, default None
If not None, then rows will be written in batches of this
size at a time. If None, all rows will be written at once.
- dtype : dict of column name to SQL type, default None
+ dtype : single SQL type or dict of column name to SQL type, default
+ None
Optional specifying the datatype for columns. The SQL type should
- be a string.
+ be a string. If all columns are of the same type, one single value
+ can be used.
"""
+ if dtype and not is_dictlike(dtype):
+ dtype = {col_name: dtype for col_name in frame}
if dtype is not None:
for col, my_type in dtype.items():
if not isinstance(my_type, str):
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 2be0c4edb8f5d..a292c0fe04f40 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1302,17 +1302,17 @@ def test_read_duplicate_names(self):
def test_inf_parsing(self):
data = """\
-,A
-a,inf
-b,-inf
-c,+Inf
-d,-Inf
-e,INF
-f,-INF
-g,+INf
-h,-INf
-i,inF
-j,-inF"""
+ ,A
+ a,inf
+ b,-inf
+ c,+Inf
+ d,-Inf
+ e,INF
+ f,-INF
+ g,+INf
+ h,-INf
+ i,inF
+ j,-inF"""
inf = float('inf')
expected = Series([inf, -inf] * 5)
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index 9a995c17f0445..621c34ff75ce8 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -1552,6 +1552,19 @@ def test_dtype(self):
self.assertTrue(isinstance(sqltype, sqlalchemy.String))
self.assertEqual(sqltype.length, 10)
+ def test_to_sql_single_dtype(self):
+ cols = ['A', 'B']
+ data = [('a', 'b'),
+ ('c', 'd')]
+ df = DataFrame(data, columns=cols)
+ df.to_sql('single_dtype_test', self.conn, dtype=sqlalchemy.TEXT)
+ meta = sqlalchemy.schema.MetaData(bind=self.conn)
+ meta.reflect()
+ sqltypea = meta.tables['single_dtype_test'].columns['A'].type
+ sqltypeb = meta.tables['single_dtype_test'].columns['B'].type
+ self.assertTrue(isinstance(sqltypea, sqlalchemy.TEXT))
+ self.assertTrue(isinstance(sqltypeb, sqlalchemy.TEXT))
+
def test_notnull_dtype(self):
cols = {'Bool': Series([True, None]),
'Date': Series([datetime(2012, 5, 1), None]),
@@ -2025,6 +2038,21 @@ def test_dtype(self):
self.assertRaises(ValueError, df.to_sql,
'error', self.conn, dtype={'B': bool})
+ def test_to_sql_single_dtype(self):
+ if self.flavor == 'mysql':
+ raise nose.SkipTest('Not applicable to MySQL legacy')
+ self.drop_table('single_dtype_test')
+ cols = ['A', 'B']
+ data = [('a', 'b'),
+ ('c', 'd')]
+ df = DataFrame(data, columns=cols)
+ df.to_sql('single_dtype_test', self.conn, dtype='STRING')
+ self.assertEqual(
+ self._get_sqlite_column_type('single_dtype_test', 'A'), 'STRING')
+ self.assertEqual(
+ self._get_sqlite_column_type('single_dtype_test', 'B'), 'STRING')
+ self.drop_table('single_dtype_test')
+
def test_notnull_dtype(self):
if self.flavor == 'mysql':
raise nose.SkipTest('Not applicable to MySQL legacy')
| Follow-up in https://github.com/pydata/pandas/pull/13614
---
This solves #11886.
The change checks whether the passed `dtype` argument is a dictionary.
If not, it builds a new dictionary keyed by the DataFrame's columns, with the single type as every value.
It then passes this dictionary on to `pandasSQL_builder`.
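The normalization step can be sketched in isolation (the function name here is illustrative, not the actual pandas API):

```python
def normalize_dtype(dtype, columns):
    """Expand a single SQL type into a per-column mapping, as the
    patched to_sql does; a dict (or None) passes through unchanged."""
    if dtype is not None and not isinstance(dtype, dict):
        return {col: dtype for col in columns}
    return dtype

print(normalize_dtype("TEXT", ["A", "B"]))   # → {'A': 'TEXT', 'B': 'TEXT'}
print(normalize_dtype({"A": "TEXT"}, ["A", "B"]))  # → {'A': 'TEXT'}
print(normalize_dtype(None, ["A", "B"]))     # → None
```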
| https://api.github.com/repos/pandas-dev/pandas/pulls/13252 | 2016-05-21T16:27:20Z | 2016-07-11T10:18:22Z | null | 2023-05-11T01:13:37Z |
TST/ERR: Add Period ops tests / fix error message | diff --git a/pandas/src/period.pyx b/pandas/src/period.pyx
index 670fe1e4f168c..858aa58df8d7d 100644
--- a/pandas/src/period.pyx
+++ b/pandas/src/period.pyx
@@ -799,8 +799,8 @@ cdef class Period(object):
else:
ordinal = self.ordinal + (nanos // offset_nanos)
return Period(ordinal=ordinal, freq=self.freq)
- msg = 'Input cannnot be converted to Period(freq={0})'
- raise ValueError(msg)
+ msg = 'Input cannot be converted to Period(freq={0})'
+ raise IncompatibleFrequency(msg.format(self.freqstr))
elif isinstance(other, offsets.DateOffset):
freqstr = frequencies.get_standard_freq(other)
base = frequencies.get_base_alias(freqstr)
@@ -849,8 +849,8 @@ cdef class Period(object):
return Period(ordinal=ordinal, freq=self.freq)
elif isinstance(other, Period):
if other.freq != self.freq:
- raise ValueError("Cannot do arithmetic with "
- "non-conforming periods")
+ msg = _DIFFERENT_FREQ.format(self.freqstr, other.freqstr)
+ raise IncompatibleFrequency(msg)
if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT:
return Period(ordinal=tslib.iNaT, freq=self.freq)
return self.ordinal - other.ordinal
@@ -865,7 +865,6 @@ cdef class Period(object):
else:
return NotImplemented
-
def asfreq(self, freq, how='E'):
"""
Convert Period to desired frequency, either at the start or end of the
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index c5aae1f8ecebb..8ebdcc7acff2d 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -492,8 +492,8 @@ def test_sub_delta(self):
result = left - right
self.assertEqual(result, 4)
- self.assertRaises(ValueError, left.__sub__,
- Period('2007-01', freq='M'))
+ with self.assertRaises(period.IncompatibleFrequency):
+ left - Period('2007-01', freq='M')
def test_to_timestamp(self):
p = Period('1982', freq='A')
@@ -829,9 +829,13 @@ def test_asfreq_MS(self):
self.assertEqual(initial.asfreq(freq="M", how="S"),
Period('2013-01', 'M'))
- self.assertRaises(ValueError, initial.asfreq, freq="MS", how="S")
- tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS", pd.Period,
- '2013-01', 'MS')
+
+ with self.assertRaisesRegexp(ValueError, "Unknown freqstr"):
+ initial.asfreq(freq="MS", how="S")
+
+ with tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS"):
+ pd.Period('2013-01', 'MS')
+
self.assertTrue(_period_code_map.get("MS") is None)
@@ -1638,7 +1642,7 @@ def test_constructor_use_start_freq(self):
p = Period('4/2/2012', freq='B')
index = PeriodIndex(start=p, periods=10)
expected = PeriodIndex(start='4/2/2012', periods=10, freq='B')
- self.assertTrue(index.equals(expected))
+ tm.assert_index_equal(index, expected)
def test_constructor_field_arrays(self):
# GH #1264
@@ -1648,13 +1652,13 @@ def test_constructor_field_arrays(self):
index = PeriodIndex(year=years, quarter=quarters, freq='Q-DEC')
expected = period_range('1990Q3', '2009Q2', freq='Q-DEC')
- self.assertTrue(index.equals(expected))
+ tm.assert_index_equal(index, expected)
index2 = PeriodIndex(year=years, quarter=quarters, freq='2Q-DEC')
tm.assert_numpy_array_equal(index.asi8, index2.asi8)
index = PeriodIndex(year=years, quarter=quarters)
- self.assertTrue(index.equals(expected))
+ tm.assert_index_equal(index, expected)
years = [2007, 2007, 2007]
months = [1, 2]
@@ -1669,7 +1673,7 @@ def test_constructor_field_arrays(self):
months = [1, 2, 3]
idx = PeriodIndex(year=years, month=months, freq='M')
exp = period_range('2007-01', periods=3, freq='M')
- self.assertTrue(idx.equals(exp))
+ tm.assert_index_equal(idx, exp)
def test_constructor_U(self):
# U was used as undefined period
@@ -1700,7 +1704,7 @@ def test_constructor_corner(self):
result = period_range('2007-01', periods=10.5, freq='M')
exp = period_range('2007-01', periods=10, freq='M')
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_constructor_fromarraylike(self):
idx = period_range('2007-01', periods=20, freq='M')
@@ -1711,29 +1715,29 @@ def test_constructor_fromarraylike(self):
data=Period('2007', freq='A'))
result = PeriodIndex(iter(idx))
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
result = PeriodIndex(idx)
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
result = PeriodIndex(idx, freq='M')
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
result = PeriodIndex(idx, freq=offsets.MonthEnd())
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
self.assertTrue(result.freq, 'M')
result = PeriodIndex(idx, freq='2M')
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx.asfreq('2M'))
self.assertTrue(result.freq, '2M')
result = PeriodIndex(idx, freq=offsets.MonthEnd(2))
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx.asfreq('2M'))
self.assertTrue(result.freq, '2M')
result = PeriodIndex(idx, freq='D')
exp = idx.asfreq('D', 'e')
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_constructor_datetime64arr(self):
vals = np.arange(100000, 100000 + 10000, 100, dtype=np.int64)
@@ -1744,10 +1748,10 @@ def test_constructor_datetime64arr(self):
def test_constructor_simple_new(self):
idx = period_range('2007-01', name='p', periods=2, freq='M')
result = idx._simple_new(idx, 'p', freq=idx.freq)
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
result = idx._simple_new(idx.astype('i8'), 'p', freq=idx.freq)
- self.assertTrue(result.equals(idx))
+ tm.assert_index_equal(result, idx)
result = idx._simple_new(
[pd.Period('2007-01', freq='M'), pd.Period('2007-02', freq='M')],
@@ -1801,14 +1805,14 @@ def test_constructor_freq_mult(self):
for func in [PeriodIndex, period_range]:
# must be the same, but for sure...
pidx = func(start='2014-01', freq='2M', periods=4)
- expected = PeriodIndex(
- ['2014-01', '2014-03', '2014-05', '2014-07'], freq='M')
+ expected = PeriodIndex(['2014-01', '2014-03',
+ '2014-05', '2014-07'], freq='2M')
tm.assert_index_equal(pidx, expected)
pidx = func(start='2014-01-02', end='2014-01-15', freq='3D')
expected = PeriodIndex(['2014-01-02', '2014-01-05',
'2014-01-08', '2014-01-11',
- '2014-01-14'], freq='D')
+ '2014-01-14'], freq='3D')
tm.assert_index_equal(pidx, expected)
pidx = func(end='2014-01-01 17:00', freq='4H', periods=3)
@@ -1837,7 +1841,7 @@ def test_constructor_freq_mult_dti_compat(self):
freqstr = str(mult) + freq
pidx = PeriodIndex(start='2014-04-01', freq=freqstr, periods=10)
expected = date_range(start='2014-04-01', freq=freqstr,
- periods=10).to_period(freq)
+ periods=10).to_period(freqstr)
tm.assert_index_equal(pidx, expected)
def test_is_(self):
@@ -1965,11 +1969,11 @@ def test_sub(self):
result = rng - 5
exp = rng + (-5)
- self.assertTrue(result.equals(exp))
+ tm.assert_index_equal(result, exp)
def test_periods_number_check(self):
- self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1',
- 'B')
+ with tm.assertRaises(ValueError):
+ period_range('2011-1-1', '2012-1-1', 'B')
def test_tolist(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1977,7 +1981,7 @@ def test_tolist(self):
[tm.assertIsInstance(x, Period) for x in rs]
recon = PeriodIndex(rs)
- self.assertTrue(index.equals(recon))
+ tm.assert_index_equal(index, recon)
def test_to_timestamp(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1985,12 +1989,12 @@ def test_to_timestamp(self):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = series.to_timestamp(how='end')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
self.assertEqual(result.name, 'foo')
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = series.to_timestamp(how='start')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
def _get_with_delta(delta, freq='A-DEC'):
return date_range(to_datetime('1/1/2001') + delta,
@@ -1999,17 +2003,17 @@ def _get_with_delta(delta, freq='A-DEC'):
delta = timedelta(hours=23)
result = series.to_timestamp('H', 'end')
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
delta = timedelta(hours=23, minutes=59)
result = series.to_timestamp('T', 'end')
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
result = series.to_timestamp('S', 'end')
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
series = Series(1, index=index, name='foo')
@@ -2017,7 +2021,7 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = date_range('1/1/2001 00:59:59', end='1/2/2001 00:59:59',
freq='H')
result = series.to_timestamp(how='end')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
self.assertEqual(result.name, 'foo')
def test_to_timestamp_quarterly_bug(self):
@@ -2028,7 +2032,7 @@ def test_to_timestamp_quarterly_bug(self):
stamps = pindex.to_timestamp('D', 'end')
expected = DatetimeIndex([x.to_timestamp('D', 'end') for x in pindex])
- self.assertTrue(stamps.equals(expected))
+ tm.assert_index_equal(stamps, expected)
def test_to_timestamp_preserve_name(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009',
@@ -2054,11 +2058,11 @@ def test_to_timestamp_pi_nat(self):
result = index.to_timestamp('D')
expected = DatetimeIndex([pd.NaT, datetime(2011, 1, 1),
datetime(2011, 2, 1)], name='idx')
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, 'idx')
result2 = result.to_period(freq='M')
- self.assertTrue(result2.equals(index))
+ tm.assert_index_equal(result2, index)
self.assertEqual(result2.name, 'idx')
result3 = result.to_period(freq='3M')
@@ -2085,12 +2089,12 @@ def test_to_timestamp_pi_mult(self):
def test_start_time(self):
index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
expected_index = date_range('2016-01-01', end='2016-05-31', freq='MS')
- self.assertTrue(index.start_time.equals(expected_index))
+ tm.assert_index_equal(index.start_time, expected_index)
def test_end_time(self):
index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
expected_index = date_range('2016-01-01', end='2016-05-31', freq='M')
- self.assertTrue(index.end_time.equals(expected_index))
+ tm.assert_index_equal(index.end_time, expected_index)
def test_as_frame_columns(self):
rng = period_range('1/1/2000', periods=5)
@@ -2115,17 +2119,18 @@ def test_indexing(self):
self.assertEqual(expected, result)
def test_frame_setitem(self):
- rng = period_range('1/1/2000', periods=5)
- rng.name = 'index'
+ rng = period_range('1/1/2000', periods=5, name='index')
df = DataFrame(randn(5, 3), index=rng)
df['Index'] = rng
rs = Index(df['Index'])
- self.assertTrue(rs.equals(rng))
+ tm.assert_index_equal(rs, rng, check_names=False)
+ self.assertEqual(rs.name, 'Index')
+ self.assertEqual(rng.name, 'index')
rs = df.reset_index().set_index('index')
tm.assertIsInstance(rs.index, PeriodIndex)
- self.assertTrue(rs.index.equals(rng))
+ tm.assert_index_equal(rs.index, rng)
def test_period_set_index_reindex(self):
# GH 6631
@@ -2134,9 +2139,9 @@ def test_period_set_index_reindex(self):
idx2 = period_range('2013', periods=6, freq='A')
df = df.set_index(idx1)
- self.assertTrue(df.index.equals(idx1))
+ tm.assert_index_equal(df.index, idx1)
df = df.set_index(idx2)
- self.assertTrue(df.index.equals(idx2))
+ tm.assert_index_equal(df.index, idx2)
def test_frame_to_time_stamp(self):
K = 5
@@ -2146,12 +2151,12 @@ def test_frame_to_time_stamp(self):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = df.to_timestamp('D', 'end')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
assert_almost_equal(result.values, df.values)
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start')
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
def _get_with_delta(delta, freq='A-DEC'):
return date_range(to_datetime('1/1/2001') + delta,
@@ -2160,44 +2165,44 @@ def _get_with_delta(delta, freq='A-DEC'):
delta = timedelta(hours=23)
result = df.to_timestamp('H', 'end')
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
delta = timedelta(hours=23, minutes=59)
result = df.to_timestamp('T', 'end')
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
result = df.to_timestamp('S', 'end')
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
- self.assertTrue(result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
# columns
df = df.T
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = df.to_timestamp('D', 'end', axis=1)
- self.assertTrue(result.columns.equals(exp_index))
+ tm.assert_index_equal(result.columns, exp_index)
assert_almost_equal(result.values, df.values)
exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start', axis=1)
- self.assertTrue(result.columns.equals(exp_index))
+ tm.assert_index_equal(result.columns, exp_index)
delta = timedelta(hours=23)
result = df.to_timestamp('H', 'end', axis=1)
exp_index = _get_with_delta(delta)
- self.assertTrue(result.columns.equals(exp_index))
+ tm.assert_index_equal(result.columns, exp_index)
delta = timedelta(hours=23, minutes=59)
result = df.to_timestamp('T', 'end', axis=1)
exp_index = _get_with_delta(delta)
- self.assertTrue(result.columns.equals(exp_index))
+ tm.assert_index_equal(result.columns, exp_index)
result = df.to_timestamp('S', 'end', axis=1)
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
- self.assertTrue(result.columns.equals(exp_index))
+ tm.assert_index_equal(result.columns, exp_index)
# invalid axis
assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)
@@ -2351,7 +2356,7 @@ def test_shift(self):
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
pi2 = PeriodIndex(freq='A', start='1/1/2002', end='12/1/2010')
- self.assertTrue(pi1.shift(0).equals(pi1))
+ tm.assert_index_equal(pi1.shift(0), pi1)
assert_equal(len(pi1), len(pi2))
assert_equal(pi1.shift(1).values, pi2.values)
@@ -2385,25 +2390,25 @@ def test_shift_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
'2011-04'], freq='M', name='idx')
result = idx.shift(1)
- expected = PeriodIndex(
- ['2011-02', '2011-03', 'NaT', '2011-05'], freq='M', name='idx')
- self.assertTrue(result.equals(expected))
+ expected = PeriodIndex(['2011-02', '2011-03', 'NaT',
+ '2011-05'], freq='M', name='idx')
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
def test_shift_ndarray(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
'2011-04'], freq='M', name='idx')
result = idx.shift(np.array([1, 2, 3, 4]))
- expected = PeriodIndex(
- ['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx')
- self.assertTrue(result.equals(expected))
+ expected = PeriodIndex(['2011-02', '2011-04', 'NaT',
+ '2011-08'], freq='M', name='idx')
+ tm.assert_index_equal(result, expected)
idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
'2011-04'], freq='M', name='idx')
result = idx.shift(np.array([1, -2, 3, -4]))
- expected = PeriodIndex(
- ['2011-02', '2010-12', 'NaT', '2010-12'], freq='M', name='idx')
- self.assertTrue(result.equals(expected))
+ expected = PeriodIndex(['2011-02', '2010-12', 'NaT',
+ '2010-12'], freq='M', name='idx')
+ tm.assert_index_equal(result, expected)
def test_asfreq(self):
pi1 = PeriodIndex(freq='A', start='1/1/2001', end='1/1/2001')
@@ -2477,7 +2482,7 @@ def test_asfreq_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M')
result = idx.asfreq(freq='Q')
expected = PeriodIndex(['2011Q1', '2011Q1', 'NaT', '2011Q2'], freq='Q')
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
def test_asfreq_mult_pi(self):
pi = PeriodIndex(['2001-01', '2001-02', 'NaT', '2001-03'], freq='2M')
@@ -2576,12 +2581,12 @@ def test_asfreq_ts(self):
df_result = df.asfreq('D', how='end')
exp_index = index.asfreq('D', how='end')
self.assertEqual(len(result), len(ts))
- self.assertTrue(result.index.equals(exp_index))
- self.assertTrue(df_result.index.equals(exp_index))
+ tm.assert_index_equal(result.index, exp_index)
+ tm.assert_index_equal(df_result.index, exp_index)
result = ts.asfreq('D', how='start')
self.assertEqual(len(result), len(ts))
- self.assertTrue(result.index.equals(index.asfreq('D', how='start')))
+ tm.assert_index_equal(result.index, index.asfreq('D', how='start'))
def test_badinput(self):
self.assertRaises(datetools.DateParseError, Period, '1/1/-2000', 'A')
@@ -2783,11 +2788,11 @@ def test_pindex_qaccess(self):
def test_period_dt64_round_trip(self):
dti = date_range('1/1/2000', '1/7/2002', freq='B')
pi = dti.to_period()
- self.assertTrue(pi.to_timestamp().equals(dti))
+ tm.assert_index_equal(pi.to_timestamp(), dti)
dti = date_range('1/1/2000', '1/7/2002', freq='B')
pi = dti.to_period(freq='H')
- self.assertTrue(pi.to_timestamp().equals(dti))
+ tm.assert_index_equal(pi.to_timestamp(), dti)
def test_to_period_quarterly(self):
# make sure we can make the round trip
@@ -2796,7 +2801,7 @@ def test_to_period_quarterly(self):
rng = period_range('1989Q3', '1991Q3', freq=freq)
stamps = rng.to_timestamp()
result = stamps.to_period(freq)
- self.assertTrue(rng.equals(result))
+ tm.assert_index_equal(rng, result)
def test_to_period_quarterlyish(self):
offsets = ['BQ', 'QS', 'BQS']
@@ -2841,7 +2846,7 @@ def test_multiples(self):
def test_pindex_multiples(self):
pi = PeriodIndex(start='1/1/11', end='12/31/11', freq='2M')
expected = PeriodIndex(['2011-01', '2011-03', '2011-05', '2011-07',
- '2011-09', '2011-11'], freq='M')
+ '2011-09', '2011-11'], freq='2M')
tm.assert_index_equal(pi, expected)
self.assertEqual(pi.freq, offsets.MonthEnd(2))
self.assertEqual(pi.freqstr, '2M')
@@ -2874,7 +2879,7 @@ def test_take(self):
taken2 = index[[5, 6, 8, 12]]
for taken in [taken1, taken2]:
- self.assertTrue(taken.equals(expected))
+ tm.assert_index_equal(taken, expected)
tm.assertIsInstance(taken, PeriodIndex)
self.assertEqual(taken.freq, index.freq)
self.assertEqual(taken.name, expected.name)
@@ -2954,7 +2959,7 @@ def test_align_series(self):
for kind in ['inner', 'outer', 'left', 'right']:
ts.align(ts[::2], join=kind)
msg = "Input has different freq=D from PeriodIndex\\(freq=A-DEC\\)"
- with assertRaisesRegexp(ValueError, msg):
+ with assertRaisesRegexp(period.IncompatibleFrequency, msg):
ts + ts.asfreq('D', how="end")
def test_align_frame(self):
@@ -2973,11 +2978,11 @@ def test_union(self):
index = period_range('1/1/2000', '1/20/2000', freq='D')
result = index[:-5].union(index[10:])
- self.assertTrue(result.equals(index))
+ tm.assert_index_equal(result, index)
# not in order
result = _permute(index[:-5]).union(_permute(index[10:]))
- self.assertTrue(result.equals(index))
+ tm.assert_index_equal(result, index)
# raise if different frequencies
index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -3008,13 +3013,13 @@ def test_intersection(self):
index = period_range('1/1/2000', '1/20/2000', freq='D')
result = index[:-5].intersection(index[10:])
- self.assertTrue(result.equals(index[10:-5]))
+ tm.assert_index_equal(result, index[10:-5])
# not in order
left = _permute(index[:-5])
right = _permute(index[10:])
result = left.intersection(right).sort_values()
- self.assertTrue(result.equals(index[10:-5]))
+ tm.assert_index_equal(result, index[10:-5])
# raise if different frequencies
index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -3045,7 +3050,7 @@ def test_intersection_cases(self):
for (rng, expected) in [(rng2, expected2), (rng3, expected3),
(rng4, expected4)]:
result = base.intersection(rng)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
@@ -3071,7 +3076,7 @@ def test_intersection_cases(self):
for (rng, expected) in [(rng2, expected2), (rng3, expected3),
(rng4, expected4)]:
result = base.intersection(rng)
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, 'D')
@@ -3151,7 +3156,7 @@ def test_map(self):
index = PeriodIndex([2005, 2007, 2009], freq='A')
result = index.map(lambda x: x + 1)
expected = index + 1
- self.assertTrue(result.equals(expected))
+ tm.assert_index_equal(result, expected)
result = index.map(lambda x: x.ordinal)
exp = [x.ordinal for x in index]
@@ -3252,11 +3257,11 @@ def test_factorize(self):
arr, idx = idx1.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
arr, idx = idx1.factorize(sort=True)
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
idx2 = pd.PeriodIndex(['2014-03', '2014-03', '2014-02', '2014-01',
'2014-03', '2014-01'], freq='M')
@@ -3264,19 +3269,19 @@ def test_factorize(self):
exp_arr = np.array([2, 2, 1, 0, 2, 0])
arr, idx = idx2.factorize(sort=True)
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
exp_arr = np.array([0, 0, 1, 2, 0, 2])
exp_idx = PeriodIndex(['2014-03', '2014-02', '2014-01'], freq='M')
arr, idx = idx2.factorize()
self.assert_numpy_array_equal(arr, exp_arr)
- self.assertTrue(idx.equals(exp_idx))
+ tm.assert_index_equal(idx, exp_idx)
def test_recreate_from_data(self):
for o in ['M', 'Q', 'A', 'D', 'B', 'T', 'S', 'L', 'U', 'N', 'H']:
org = PeriodIndex(start='2001/04/01', freq=o, periods=1)
idx = PeriodIndex(org.values, freq=o)
- self.assertTrue(idx.equals(org))
+ tm.assert_index_equal(idx, org)
def test_combine_first(self):
# GH 3367
@@ -3324,7 +3329,6 @@ def _permute(obj):
class TestMethods(tm.TestCase):
- "Base test class for MaskedArrays."
def test_add(self):
dt1 = Period(freq='D', year=2008, month=1, day=1)
@@ -3356,6 +3360,17 @@ def test_add_raises(self):
with tm.assertRaisesRegexp(TypeError, msg):
dt1 + dt2
+ def test_sub(self):
+ dt1 = Period('2011-01-01', freq='D')
+ dt2 = Period('2011-01-15', freq='D')
+
+ self.assertEqual(dt1 - dt2, -14)
+ self.assertEqual(dt2 - dt1, 14)
+
+ msg = "Input has different freq=M from Period\(freq=D\)"
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ dt1 - pd.Period('2011-02', freq='M')
+
def test_add_offset(self):
# freq is DateOffset
for freq in ['A', '2A', '3A']:
@@ -3367,14 +3382,14 @@ def test_add_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
for freq in ['M', '2M', '3M']:
@@ -3390,14 +3405,14 @@ def test_add_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
# freq is Tick
@@ -3433,14 +3448,14 @@ def test_add_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(4, 'h'),
timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
for freq in ['H', '2H', '3H']:
@@ -3475,14 +3490,14 @@ def test_add_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
def test_add_offset_nat(self):
@@ -3496,14 +3511,14 @@ def test_add_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
for freq in ['M', '2M', '3M']:
@@ -3520,14 +3535,14 @@ def test_add_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
# freq is Tick
for freq in ['D', '2D', '3D']:
@@ -3547,14 +3562,14 @@ def test_add_offset_nat(self):
offsets.Minute(), np.timedelta64(4, 'h'),
timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
for freq in ['H', '2H', '3H']:
@@ -3570,14 +3585,14 @@ def test_add_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p + o
if isinstance(o, np.timedelta64):
with tm.assertRaises(TypeError):
o + p
else:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
o + p
def test_sub_pdnat(self):
@@ -3599,7 +3614,7 @@ def test_sub_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
for freq in ['M', '2M', '3M']:
@@ -3612,7 +3627,7 @@ def test_sub_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
# freq is Tick
@@ -3634,7 +3649,7 @@ def test_sub_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(4, 'h'),
timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
for freq in ['H', '2H', '3H']:
@@ -3655,7 +3670,7 @@ def test_sub_offset(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
def test_sub_offset_nat(self):
@@ -3668,7 +3683,7 @@ def test_sub_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
for freq in ['M', '2M', '3M']:
@@ -3679,7 +3694,7 @@ def test_sub_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(365, 'D'),
timedelta(365)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
# freq is Tick
@@ -3693,7 +3708,7 @@ def test_sub_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(4, 'h'),
timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
for freq in ['H', '2H', '3H']:
@@ -3706,7 +3721,7 @@ def test_sub_offset_nat(self):
for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
offsets.Minute(), np.timedelta64(3200, 's'),
timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
p - o
def test_nat_ops(self):
@@ -3715,77 +3730,153 @@ def test_nat_ops(self):
self.assertEqual((p + 1).ordinal, tslib.iNaT)
self.assertEqual((1 + p).ordinal, tslib.iNaT)
self.assertEqual((p - 1).ordinal, tslib.iNaT)
- self.assertEqual(
- (p - Period('2011-01', freq=freq)).ordinal, tslib.iNaT)
- self.assertEqual(
- (Period('2011-01', freq=freq) - p).ordinal, tslib.iNaT)
+ self.assertEqual((p - Period('2011-01', freq=freq)).ordinal,
+ tslib.iNaT)
+ self.assertEqual((Period('2011-01', freq=freq) - p).ordinal,
+ tslib.iNaT)
+
+ def test_period_ops_offset(self):
+ p = Period('2011-04-01', freq='D')
+ result = p + offsets.Day()
+ exp = pd.Period('2011-04-02', freq='D')
+ self.assertEqual(result, exp)
- def test_pi_ops_nat(self):
- idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+ result = p - offsets.Day(2)
+ exp = pd.Period('2011-03-30', freq='D')
+ self.assertEqual(result, exp)
+
+ msg = "Input cannot be converted to Period\(freq=D\)"
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ p + offsets.Hour(2)
+
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ p - offsets.Hour(2)
+
+
+class TestPeriodIndexSeriesMethods(tm.TestCase):
+ """ Test PeriodIndex and Period Series Ops consistency """
+
+ def _check(self, values, func, expected):
+ idx = pd.PeriodIndex(values)
+ result = func(idx)
+ tm.assert_index_equal(result, pd.PeriodIndex(expected))
+
+ s = pd.Series(values)
+ result = func(s)
+
+ exp = pd.Series(expected)
+ # Period(NaT) != Period(NaT)
+
+ lmask = result.map(lambda x: x.ordinal != tslib.iNaT)
+ rmask = exp.map(lambda x: x.ordinal != tslib.iNaT)
+ tm.assert_series_equal(lmask, rmask)
+ tm.assert_series_equal(result[lmask], exp[rmask])
+
+ def test_pi_ops(self):
+ idx = PeriodIndex(['2011-01', '2011-02', '2011-03',
'2011-04'], freq='M', name='idx')
- result = idx + 2
- expected = PeriodIndex(
- ['2011-03', '2011-04', 'NaT', '2011-06'], freq='M', name='idx')
- self.assertTrue(result.equals(expected))
- result2 = result - 2
- self.assertTrue(result2.equals(idx))
+ expected = PeriodIndex(['2011-03', '2011-04',
+ '2011-05', '2011-06'], freq='M', name='idx')
+ self._check(idx, lambda x: x + 2, expected)
+ self._check(idx, lambda x: 2 + x, expected)
+
+ self._check(idx + 2, lambda x: x - 2, idx)
+ result = idx - Period('2011-01', freq='M')
+ exp = pd.Index([0, 1, 2, 3], name='idx')
+ tm.assert_index_equal(result, exp)
+
+ result = Period('2011-01', freq='M') - idx
+ exp = pd.Index([0, -1, -2, -3], name='idx')
+ tm.assert_index_equal(result, exp)
+
+ def test_pi_ops_errors(self):
+ idx = PeriodIndex(['2011-01', '2011-02', '2011-03',
+ '2011-04'], freq='M', name='idx')
+ s = pd.Series(idx)
msg = "unsupported operand type\(s\)"
- with tm.assertRaisesRegexp(TypeError, msg):
- idx + "str"
+ for obj in [idx, s]:
+ for ng in ["str", 1.5]:
+ with tm.assertRaisesRegexp(TypeError, msg):
+ obj + ng
+
+ with tm.assertRaises(TypeError):
+ # error message differs between PY2 and 3
+ ng + obj
- def test_pi_ops_array(self):
+ with tm.assertRaisesRegexp(TypeError, msg):
+ obj - ng
+
+ def test_pi_ops_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
'2011-04'], freq='M', name='idx')
- result = idx + np.array([1, 2, 3, 4])
+ expected = PeriodIndex(['2011-03', '2011-04',
+ 'NaT', '2011-06'], freq='M', name='idx')
+ self._check(idx, lambda x: x + 2, expected)
+ self._check(idx, lambda x: 2 + x, expected)
+
+ self._check(idx + 2, lambda x: x - 2, idx)
+
+ def test_pi_ops_array_int(self):
+ idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+ '2011-04'], freq='M', name='idx')
+ f = lambda x: x + np.array([1, 2, 3, 4])
exp = PeriodIndex(['2011-02', '2011-04', 'NaT',
'2011-08'], freq='M', name='idx')
- self.assert_index_equal(result, exp)
+ self._check(idx, f, exp)
- result = np.add(idx, np.array([4, -1, 1, 2]))
+ f = lambda x: np.add(x, np.array([4, -1, 1, 2]))
exp = PeriodIndex(['2011-05', '2011-01', 'NaT',
'2011-06'], freq='M', name='idx')
- self.assert_index_equal(result, exp)
+ self._check(idx, f, exp)
- result = idx - np.array([1, 2, 3, 4])
+ f = lambda x: x - np.array([1, 2, 3, 4])
exp = PeriodIndex(['2010-12', '2010-12', 'NaT',
'2010-12'], freq='M', name='idx')
- self.assert_index_equal(result, exp)
+ self._check(idx, f, exp)
- result = np.subtract(idx, np.array([3, 2, 3, -2]))
+ f = lambda x: np.subtract(x, np.array([3, 2, 3, -2]))
exp = PeriodIndex(['2010-10', '2010-12', 'NaT',
'2011-06'], freq='M', name='idx')
- self.assert_index_equal(result, exp)
-
- # incompatible freq
- msg = "Input has different freq from PeriodIndex\(freq=M\)"
- with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
- idx + np.array([np.timedelta64(1, 'D')] * 4)
-
- idx = PeriodIndex(['2011-01-01 09:00', '2011-01-01 10:00', 'NaT',
- '2011-01-01 12:00'], freq='H', name='idx')
- result = idx + np.array([np.timedelta64(1, 'D')] * 4)
- exp = PeriodIndex(['2011-01-02 09:00', '2011-01-02 10:00', 'NaT',
- '2011-01-02 12:00'], freq='H', name='idx')
- self.assert_index_equal(result, exp)
-
- result = idx - np.array([np.timedelta64(1, 'h')] * 4)
- exp = PeriodIndex(['2011-01-01 08:00', '2011-01-01 09:00', 'NaT',
- '2011-01-01 11:00'], freq='H', name='idx')
- self.assert_index_equal(result, exp)
+ self._check(idx, f, exp)
+
+ def test_pi_ops_offset(self):
+ idx = PeriodIndex(['2011-01-01', '2011-02-01', '2011-03-01',
+ '2011-04-01'], freq='D', name='idx')
+ f = lambda x: x + offsets.Day()
+ exp = PeriodIndex(['2011-01-02', '2011-02-02', '2011-03-02',
+ '2011-04-02'], freq='D', name='idx')
+ self._check(idx, f, exp)
+
+ f = lambda x: x + offsets.Day(2)
+ exp = PeriodIndex(['2011-01-03', '2011-02-03', '2011-03-03',
+ '2011-04-03'], freq='D', name='idx')
+ self._check(idx, f, exp)
+
+ f = lambda x: x - offsets.Day(2)
+ exp = PeriodIndex(['2010-12-30', '2011-01-30', '2011-02-27',
+ '2011-03-30'], freq='D', name='idx')
+ self._check(idx, f, exp)
+
+ def test_pi_offset_errors(self):
+ idx = PeriodIndex(['2011-01-01', '2011-02-01', '2011-03-01',
+ '2011-04-01'], freq='D', name='idx')
+ s = pd.Series(idx)
+
+ # Series op is applied per Period instance, thus error is raised
+ # from Period
+ msg_idx = "Input has different freq from PeriodIndex\(freq=D\)"
+ msg_s = "Input cannot be converted to Period\(freq=D\)"
+ for obj, msg in [(idx, msg_idx), (s, msg_s)]:
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ obj + offsets.Hour(2)
- msg = "Input has different freq from PeriodIndex\(freq=H\)"
- with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
- idx + np.array([np.timedelta64(1, 's')] * 4)
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ offsets.Hour(2) + obj
- idx = PeriodIndex(['2011-01-01 09:00:00', '2011-01-01 10:00:00', 'NaT',
- '2011-01-01 12:00:00'], freq='S', name='idx')
- result = idx + np.array([np.timedelta64(1, 'h'), np.timedelta64(
- 30, 's'), np.timedelta64(2, 'h'), np.timedelta64(15, 'm')])
- exp = PeriodIndex(['2011-01-01 10:00:00', '2011-01-01 10:00:30', 'NaT',
- '2011-01-01 12:15:00'], freq='S', name='idx')
- self.assert_index_equal(result, exp)
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+ obj - offsets.Hour(2)
def test_pi_sub_period(self):
# GH 13071
@@ -3903,7 +3994,7 @@ def test_equal(self):
self.assertEqual(self.january1, self.january2)
def test_equal_Raises_Value(self):
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
self.january1 == self.day
def test_notEqual(self):
@@ -3914,7 +4005,7 @@ def test_greater(self):
self.assertTrue(self.february > self.january1)
def test_greater_Raises_Value(self):
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
self.january1 > self.day
def test_greater_Raises_Type(self):
@@ -3925,8 +4016,9 @@ def test_greaterEqual(self):
self.assertTrue(self.january1 >= self.january2)
def test_greaterEqual_Raises_Value(self):
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
self.january1 >= self.day
+
with tm.assertRaises(TypeError):
print(self.january1 >= 1)
@@ -3934,7 +4026,7 @@ def test_smallerEqual(self):
self.assertTrue(self.january1 <= self.january2)
def test_smallerEqual_Raises_Value(self):
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
self.january1 <= self.day
def test_smallerEqual_Raises_Type(self):
@@ -3945,7 +4037,7 @@ def test_smaller(self):
self.assertTrue(self.january1 < self.february)
def test_smaller_Raises_Value(self):
- with tm.assertRaises(ValueError):
+ with tm.assertRaises(period.IncompatibleFrequency):
self.january1 < self.day
def test_smaller_Raises_Type(self):
@@ -4033,7 +4125,7 @@ def test_pi_pi_comp(self):
with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
Period('2011', freq='A') >= base
- with tm.assertRaisesRegexp(ValueError, msg):
+ with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='A')
base <= idx
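The bulk of the `test_period.py` changes above swap blanket `ValueError` expectations for the more specific `period.IncompatibleFrequency`. The behavior being pinned down can be sketched in plain Python (the `MiniPeriod` class below is illustrative only, not pandas' actual `Period` implementation; note that pandas' `IncompatibleFrequency` subclasses `ValueError`, which is why the older tests still passed):

```python
# Sketch: arithmetic and comparisons between periods of different
# frequencies raise a dedicated IncompatibleFrequency error rather
# than a bare ValueError. Illustrative names, not pandas internals.

class IncompatibleFrequency(ValueError):
    pass


class MiniPeriod:
    def __init__(self, ordinal, freq):
        self.ordinal, self.freq = ordinal, freq

    def _check_freq(self, other):
        if self.freq != other.freq:
            raise IncompatibleFrequency(
                "Input has different freq=%s from Period(freq=%s)"
                % (other.freq, self.freq))

    def __sub__(self, other):
        # subtraction of two same-freq periods yields an integer offset
        self._check_freq(other)
        return self.ordinal - other.ordinal

    def __lt__(self, other):
        self._check_freq(other)
        return self.ordinal < other.ordinal


jan = MiniPeriod(0, "M")
feb = MiniPeriod(1, "M")
day = MiniPeriod(100, "D")

print(feb - jan)  # 1
try:
    jan < day
except IncompatibleFrequency as err:
    print(err)  # Input has different freq=D from Period(freq=M)
```

Because the new exception still derives from `ValueError`, tightening the tests narrows what they accept without changing what user code must catch.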
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 37b16684643be..8e6341c6b7cc3 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -637,13 +637,13 @@ def test_resample_empty_series(self):
methods = [method for method in resample_methods
if method != 'ohlc']
for method in methods:
- expected_index = s.index._shallow_copy(freq=freq)
-
result = getattr(s.resample(freq), method)()
- expected = s
- assert_index_equal(result.index, expected_index)
- # freq equality not yet checked in assert_index_equal
- self.assertEqual(result.index.freq, expected_index.freq)
+
+ expected = s.copy()
+ expected.index = s.index._shallow_copy(freq=freq)
+ assert_index_equal(result.index, expected.index)
+ self.assertEqual(result.index.freq, expected.index.freq)
+
if (method == 'size' and
isinstance(result.index, PeriodIndex) and
freq in ['M', 'D']):
@@ -665,13 +665,12 @@ def test_resample_empty_dataframe(self):
# count retains dimensions too
methods = downsample_methods + ['count']
for method in methods:
- expected_index = f.index._shallow_copy(freq=freq)
result = getattr(f.resample(freq), method)()
- expected = f
- assert_index_equal(result.index, expected_index)
- # freq equality not yet checked in assert_index_equal
- # TODO: remove when freq checked
- self.assertEqual(result.index.freq, expected_index.freq)
+
+ expected = f.copy()
+ expected.index = f.index._shallow_copy(freq=freq)
+ assert_index_equal(result.index, expected.index)
+ self.assertEqual(result.index.freq, expected.index.freq)
assert_frame_equal(result, expected, check_dtype=False)
# test size for GH13212 (currently stays as df)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 8682302b542be..73063675ebfec 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -751,6 +751,8 @@ def _get_ilevel_values(index, level):
# metadata comparison
if check_names:
assert_attr_equal('names', left, right, obj=obj)
+ if isinstance(left, pd.PeriodIndex) or isinstance(right, pd.PeriodIndex):
+ assert_attr_equal('freq', left, right, obj=obj)
def assert_class_equal(left, right, exact=True, obj='Input'):
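The `pandas/util/testing.py` hunk above teaches `assert_index_equal` to compare the `freq` attribute whenever either side is a `PeriodIndex`, alongside the existing `names` check. A minimal plain-Python sketch of that metadata comparison (the `FakeIndex` helper is hypothetical, standing in for a real index object):

```python
# Sketch of the extra metadata check added to assert_index_equal:
# names are compared as before, and freq must now also match when
# either operand is period-like.

class FakeIndex:
    def __init__(self, names=None, freq=None, is_period=False):
        self.names, self.freq, self.is_period = names, freq, is_period


def assert_attr_equal(attr, left, right, obj="Index"):
    lval, rval = getattr(left, attr), getattr(right, attr)
    assert lval == rval, "%s: attribute %r differs (%r != %r)" % (
        obj, attr, lval, rval)


def check_metadata(left, right, check_names=True):
    if check_names:
        assert_attr_equal("names", left, right)
    # new: freq equality is part of index equality for period-like input
    if left.is_period or right.is_period:
        assert_attr_equal("freq", left, right)


check_metadata(FakeIndex(names=["idx"], freq="M", is_period=True),
               FakeIndex(names=["idx"], freq="M", is_period=True))
try:
    check_metadata(FakeIndex(names=["idx"], freq="M", is_period=True),
                   FakeIndex(names=["idx"], freq="2M", is_period=True))
except AssertionError as err:
    print(err)  # Index: attribute 'freq' differs ('M' != '2M')
```

This is also why the `test_resample.py` hunks could drop their manual `result.index.freq == expected.index.freq` workaround comments in favor of building a full `expected` object.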
| - [x] related to #13242
closes #13251
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
This must be after #13079.
| https://api.github.com/repos/pandas-dev/pandas/pulls/13250 | 2016-05-21T13:48:54Z | 2016-05-22T00:05:22Z | null | 2016-05-22T00:25:24Z |
TST: check internal Categorical | diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index a77bdcec2ce7a..4b3c96da10efd 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -180,7 +180,7 @@ Bug Fixes
- Bug in ``Period`` addition raises ``TypeError`` if ``Period`` is on right hand side (:issue:`13069`)
- Bug in ``Peirod`` and ``Series`` or ``Index`` comparison raises ``TypeError`` (:issue:`13200`)
- Bug in ``pd.set_eng_float_format()`` that would prevent NaN's from formatting (:issue:`11981`)
-
+- Bug in ``.unstack`` with ``Categorical`` dtype resets ``.ordered`` to ``True`` (:issue:`13249`)
- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 7e0c094aec4c2..8d237016d1b33 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -162,9 +162,12 @@ def get_result(self):
# may need to coerce categoricals here
if self.is_categorical is not None:
- values = [Categorical.from_array(
- values[:, i], categories=self.is_categorical.categories,
- ordered=True) for i in range(values.shape[-1])]
+ categories = self.is_categorical.categories
+ ordered = self.is_categorical.ordered
+ values = [Categorical.from_array(values[:, i],
+ categories=categories,
+ ordered=ordered)
+ for i in range(values.shape[-1])]
return DataFrame(values, index=index, columns=columns)
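The `reshape.py` change above is the substantive fix for issue 13249: when `.unstack` rebuilds one categorical per output column, it now propagates the source's `ordered` flag instead of hard-coding `ordered=True`. A plain-Python sketch of that logic (`FakeCategorical` is illustrative, not pandas' `Categorical`):

```python
# Sketch of the fix: carry over both `categories` and the original
# `ordered` flag when reconstructing per-column categoricals.

class FakeCategorical:
    def __init__(self, codes, categories, ordered):
        self.codes = list(codes)
        self.categories = categories
        self.ordered = ordered


def rebuild_columns(values, source):
    # `values` is a row-major 2-D list of codes; build one categorical
    # per column, reusing the source's dtype metadata
    categories = source.categories
    ordered = source.ordered  # previously this was always True
    return [FakeCategorical(col, categories, ordered)
            for col in zip(*values)]


source = FakeCategorical([0, 1, 0, 1], ["x", "y"], ordered=False)
columns = rebuild_columns([[0, 1], [0, 1]], source)
print([c.ordered for c in columns])  # [False, False]
```

Before the fix, unstacking an unordered categorical silently produced columns whose dtype claimed `ordered=True`, which is what the new whatsnew entry records.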
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index 4ff0363d07df6..7f2813d5281cb 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -108,6 +108,13 @@ def compare_series_dt_tz(self, result, expected, typ, version):
else:
tm.assert_series_equal(result, expected)
+ def compare_series_cat(self, result, expected, typ, version):
+ # Categorical.ordered is changed in < 0.16.0
+ if LooseVersion(version) < '0.16.0':
+ tm.assert_series_equal(result, expected, check_categorical=False)
+ else:
+ tm.assert_series_equal(result, expected)
+
def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
@@ -117,6 +124,16 @@ def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
else:
tm.assert_frame_equal(result, expected)
+ def compare_frame_cat_onecol(self, result, expected, typ, version):
+ # Categorical.ordered is changed in < 0.16.0
+ if LooseVersion(version) < '0.16.0':
+ tm.assert_frame_equal(result, expected, check_categorical=False)
+ else:
+ tm.assert_frame_equal(result, expected)
+
+ def compare_frame_cat_and_float(self, result, expected, typ, version):
+ self.compare_frame_cat_onecol(result, expected, typ, version)
+
def compare_index_period(self, result, expected, typ, version):
tm.assert_index_equal(result, expected)
tm.assertIsInstance(result.freq, MonthEnd)
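
The new `compare_series_cat` / `compare_frame_cat_*` hooks relax the comparison only for pickles created before 0.16.0, when `Categorical.ordered` semantics changed. On a current pandas, a pickle round-trip keeps the full categorical metadata, which is what the strict branch relies on; a quick hedged check:

```python
import pickle

import pandas as pd

# Round-tripping through pickle preserves both .categories and
# .ordered, which is what the strict (>= 0.16.0) branch checks.
s = pd.Series(pd.Categorical(['a', 'b', 'a'],
                             categories=['a', 'b', 'c'], ordered=True))
back = pickle.loads(pickle.dumps(s))

print(list(back.cat.categories))  # ['a', 'b', 'c']
print(back.cat.ordered)           # True
```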
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6bf0175526424..5ee84ce97979a 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1004,7 +1004,7 @@ def roundtrip(s, key='data', encoding='latin-1', nan_rep=''):
nan_rep=nan_rep)
retr = read_hdf(store, key)
s_nan = s.replace(nan_rep, np.nan)
- assert_series_equal(s_nan, retr)
+ assert_series_equal(s_nan, retr, check_categorical=False)
for s in examples:
roundtrip(s)
diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py
index fe782bb86d1be..17f74d5789298 100644
--- a/pandas/io/tests/test_stata.py
+++ b/pandas/io/tests/test_stata.py
@@ -234,10 +234,11 @@ def test_read_dta4(self):
expected = pd.concat([expected[col].astype('category')
for col in expected], axis=1)
- tm.assert_frame_equal(parsed_113, expected)
- tm.assert_frame_equal(parsed_114, expected)
- tm.assert_frame_equal(parsed_115, expected)
- tm.assert_frame_equal(parsed_117, expected)
+ # stata doesn't save .category metadata
+ tm.assert_frame_equal(parsed_113, expected, check_categorical=False)
+ tm.assert_frame_equal(parsed_114, expected, check_categorical=False)
+ tm.assert_frame_equal(parsed_115, expected, check_categorical=False)
+ tm.assert_frame_equal(parsed_117, expected, check_categorical=False)
# File containing strls
def test_read_dta12(self):
@@ -872,8 +873,8 @@ def test_categorical_writing(self):
# Silence warnings
original.to_stata(path)
written_and_read_again = self.read_dta(path)
- tm.assert_frame_equal(
- written_and_read_again.set_index('index'), expected)
+ res = written_and_read_again.set_index('index')
+ tm.assert_frame_equal(res, expected, check_categorical=False)
def test_categorical_warnings_and_errors(self):
# Warning for non-string labels
@@ -915,8 +916,8 @@ def test_categorical_with_stata_missing_values(self):
with tm.ensure_clean() as path:
original.to_stata(path)
written_and_read_again = self.read_dta(path)
- tm.assert_frame_equal(
- written_and_read_again.set_index('index'), original)
+ res = written_and_read_again.set_index('index')
+ tm.assert_frame_equal(res, original, check_categorical=False)
def test_categorical_order(self):
# Directly construct using expected codes
@@ -945,8 +946,8 @@ def test_categorical_order(self):
# Read with and with out categoricals, ensure order is identical
parsed_115 = read_stata(self.dta19_115)
parsed_117 = read_stata(self.dta19_117)
- tm.assert_frame_equal(expected, parsed_115)
- tm.assert_frame_equal(expected, parsed_117)
+ tm.assert_frame_equal(expected, parsed_115, check_categorical=False)
+ tm.assert_frame_equal(expected, parsed_117, check_categorical=False)
# Check identity of codes
for col in expected:
@@ -969,8 +970,10 @@ def test_categorical_sorting(self):
categories = ["Poor", "Fair", "Good", "Very good", "Excellent"]
cat = pd.Categorical.from_codes(codes=codes, categories=categories)
expected = pd.Series(cat, name='srh')
- tm.assert_series_equal(expected, parsed_115["srh"])
- tm.assert_series_equal(expected, parsed_117["srh"])
+ tm.assert_series_equal(expected, parsed_115["srh"],
+ check_categorical=False)
+ tm.assert_series_equal(expected, parsed_117["srh"],
+ check_categorical=False)
def test_categorical_ordering(self):
parsed_115 = read_stata(self.dta19_115)
@@ -1021,7 +1024,8 @@ def test_read_chunks_117(self):
from_frame = parsed.iloc[pos:pos + chunksize, :]
tm.assert_frame_equal(
from_frame, chunk, check_dtype=False,
- check_datetimelike_compat=True)
+ check_datetimelike_compat=True,
+ check_categorical=False)
pos += chunksize
itr.close()
@@ -1087,7 +1091,8 @@ def test_read_chunks_115(self):
from_frame = parsed.iloc[pos:pos + chunksize, :]
tm.assert_frame_equal(
from_frame, chunk, check_dtype=False,
- check_datetimelike_compat=True)
+ check_datetimelike_compat=True,
+ check_categorical=False)
pos += chunksize
itr.close()
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index e7d64324e6590..43c288162b134 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -158,6 +158,8 @@ def test_unstack_fill(self):
index=['x', 'y', 'z'], dtype=np.float)
assert_frame_equal(result, expected)
+ def test_unstack_fill_frame(self):
+
# From a dataframe
rows = [[1, 2], [3, 4], [5, 6], [7, 8]]
df = DataFrame(rows, columns=list('AB'), dtype=np.int32)
@@ -190,6 +192,8 @@ def test_unstack_fill(self):
[('A', 'a'), ('A', 'b'), ('B', 'a'), ('B', 'b')])
assert_frame_equal(result, expected)
+ def test_unstack_fill_frame_datetime(self):
+
# Test unstacking with date times
dv = pd.date_range('2012-01-01', periods=4).values
data = Series(dv)
@@ -208,6 +212,8 @@ def test_unstack_fill(self):
index=['x', 'y', 'z'])
assert_frame_equal(result, expected)
+ def test_unstack_fill_frame_timedelta(self):
+
# Test unstacking with time deltas
td = [Timedelta(days=i) for i in range(4)]
data = Series(td)
@@ -226,6 +232,8 @@ def test_unstack_fill(self):
index=['x', 'y', 'z'])
assert_frame_equal(result, expected)
+ def test_unstack_fill_frame_period(self):
+
# Test unstacking with period
periods = [Period('2012-01'), Period('2012-02'), Period('2012-03'),
Period('2012-04')]
@@ -245,6 +253,8 @@ def test_unstack_fill(self):
index=['x', 'y', 'z'])
assert_frame_equal(result, expected)
+ def test_unstack_fill_frame_categorical(self):
+
# Test unstacking with categorical
data = pd.Series(['a', 'b', 'c', 'a'], dtype='category')
data.index = pd.MultiIndex.from_tuples(
@@ -273,27 +283,20 @@ def test_unstack_fill(self):
assert_frame_equal(result, expected)
def test_stack_ints(self):
- df = DataFrame(
- np.random.randn(30, 27),
- columns=MultiIndex.from_tuples(
- list(itertools.product(range(3), repeat=3))
- )
- )
- assert_frame_equal(
- df.stack(level=[1, 2]),
- df.stack(level=1).stack(level=1)
- )
- assert_frame_equal(
- df.stack(level=[-2, -1]),
- df.stack(level=1).stack(level=1)
- )
+ columns = MultiIndex.from_tuples(list(itertools.product(range(3),
+ repeat=3)))
+ df = DataFrame(np.random.randn(30, 27), columns=columns)
+
+ assert_frame_equal(df.stack(level=[1, 2]),
+ df.stack(level=1).stack(level=1))
+ assert_frame_equal(df.stack(level=[-2, -1]),
+ df.stack(level=1).stack(level=1))
df_named = df.copy()
df_named.columns.set_names(range(3), inplace=True)
- assert_frame_equal(
- df_named.stack(level=[1, 2]),
- df_named.stack(level=1).stack(level=1)
- )
+
+ assert_frame_equal(df_named.stack(level=[1, 2]),
+ df_named.stack(level=1).stack(level=1))
def test_stack_mixed_levels(self):
columns = MultiIndex.from_tuples(
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index 53ab9aca03f6c..2cb62a60f885b 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -108,15 +108,17 @@ def test_loc_listlike_dtypes(self):
# unique slice
res = df.loc[['a', 'b']]
- exp = DataFrame({'A': [1, 2],
- 'B': [4, 5]}, index=pd.CategoricalIndex(['a', 'b']))
+ exp_index = pd.CategoricalIndex(['a', 'b'],
+ categories=index.categories)
+ exp = DataFrame({'A': [1, 2], 'B': [4, 5]}, index=exp_index)
tm.assert_frame_equal(res, exp, check_index_type=True)
# duplicated slice
res = df.loc[['a', 'a', 'b']]
- exp = DataFrame({'A': [1, 1, 2],
- 'B': [4, 4, 5]},
- index=pd.CategoricalIndex(['a', 'a', 'b']))
+
+ exp_index = pd.CategoricalIndex(['a', 'a', 'b'],
+ categories=index.categories)
+ exp = DataFrame({'A': [1, 1, 2], 'B': [4, 4, 5]}, index=exp_index)
tm.assert_frame_equal(res, exp, check_index_type=True)
with tm.assertRaisesRegexp(
@@ -194,12 +196,15 @@ def test_ix_categorical_index(self):
expect = pd.Series(df.ix[:, 'X'], index=cdf.index, name='X')
assert_series_equal(cdf.ix[:, 'X'], expect)
+ exp_index = pd.CategoricalIndex(list('AB'), categories=['A', 'B', 'C'])
expect = pd.DataFrame(df.ix[['A', 'B'], :], columns=cdf.columns,
- index=pd.CategoricalIndex(list('AB')))
+ index=exp_index)
assert_frame_equal(cdf.ix[['A', 'B'], :], expect)
+ exp_columns = pd.CategoricalIndex(list('XY'),
+ categories=['X', 'Y', 'Z'])
expect = pd.DataFrame(df.ix[:, ['X', 'Y']], index=cdf.index,
- columns=pd.CategoricalIndex(list('XY')))
+ columns=exp_columns)
assert_frame_equal(cdf.ix[:, ['X', 'Y']], expect)
# non-unique
@@ -209,12 +214,14 @@ def test_ix_categorical_index(self):
cdf.index = pd.CategoricalIndex(df.index)
cdf.columns = pd.CategoricalIndex(df.columns)
+ exp_index = pd.CategoricalIndex(list('AA'), categories=['A', 'B'])
expect = pd.DataFrame(df.ix['A', :], columns=cdf.columns,
- index=pd.CategoricalIndex(list('AA')))
+ index=exp_index)
assert_frame_equal(cdf.ix['A', :], expect)
+ exp_columns = pd.CategoricalIndex(list('XX'), categories=['X', 'Y'])
expect = pd.DataFrame(df.ix[:, 'X'], index=cdf.index,
- columns=pd.CategoricalIndex(list('XX')))
+ columns=exp_columns)
assert_frame_equal(cdf.ix[:, 'X'], expect)
expect = pd.DataFrame(df.ix[['A', 'B'], :], columns=cdf.columns,
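
The updated expectations above encode that a list-based `.loc` (or `.ix`) selection on a `CategoricalIndex` keeps the full category set, not just the selected labels. A small sketch with a hypothetical frame:

```python
import pandas as pd

# Hypothetical frame indexed by a CategoricalIndex.
df = pd.DataFrame({'A': [1, 2, 3]}, index=pd.CategoricalIndex(['a', 'b', 'c']))

res = df.loc[['a', 'b']]
# the selection keeps the original categories, including the unused 'c'
print(list(res.index.categories))  # ['a', 'b', 'c']
print(res['A'].tolist())           # [1, 2]
```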
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index 6e0a0175b403f..9cb1e9dd93d16 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -187,7 +187,8 @@ def test_map(self):
index=pd.CategoricalIndex(['b', 'c', 'd', 'e']))
c = Series(['B', 'C', 'D', 'E'], index=Index(['b', 'c', 'd', 'e']))
- exp = Series([np.nan, 'B', 'C', 'D'], dtype='category')
+ exp = Series(pd.Categorical([np.nan, 'B', 'C', 'D'],
+ categories=['B', 'C', 'D', 'E']))
self.assert_series_equal(a.map(b), exp)
exp = Series([np.nan, 'B', 'C', 'D'])
self.assert_series_equal(a.map(c), exp)
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 5a6667e57ce9d..40ef5354e91bd 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -556,28 +556,35 @@ def test_categories_none(self):
def test_describe(self):
# string type
desc = self.factor.describe()
+ self.assertTrue(self.factor.ordered)
+ exp_index = pd.CategoricalIndex(['a', 'b', 'c'], name='categories',
+ ordered=self.factor.ordered)
expected = DataFrame({'counts': [3, 2, 3],
'freqs': [3 / 8., 2 / 8., 3 / 8.]},
- index=pd.CategoricalIndex(['a', 'b', 'c'],
- name='categories'))
+ index=exp_index)
tm.assert_frame_equal(desc, expected)
# check unused categories
cat = self.factor.copy()
cat.set_categories(["a", "b", "c", "d"], inplace=True)
desc = cat.describe()
+
+ exp_index = pd.CategoricalIndex(['a', 'b', 'c', 'd'],
+ ordered=self.factor.ordered,
+ name='categories')
expected = DataFrame({'counts': [3, 2, 3, 0],
'freqs': [3 / 8., 2 / 8., 3 / 8., 0]},
- index=pd.CategoricalIndex(['a', 'b', 'c', 'd'],
- name='categories'))
+ index=exp_index)
tm.assert_frame_equal(desc, expected)
# check an integer one
- desc = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1]).describe()
+ cat = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1])
+ desc = cat.describe()
+ exp_index = pd.CategoricalIndex([1, 2, 3], ordered=cat.ordered,
+ name='categories')
expected = DataFrame({'counts': [5, 3, 3],
'freqs': [5 / 11., 3 / 11., 3 / 11.]},
- index=pd.CategoricalIndex([1, 2, 3],
- name='categories'))
+ index=exp_index)
tm.assert_frame_equal(desc, expected)
# https://github.com/pydata/pandas/issues/3678
@@ -601,7 +608,7 @@ def test_describe(self):
columns=['counts', 'freqs'],
index=pd.CategoricalIndex(['b', 'a', 'c', np.nan],
name='categories'))
- tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected, check_categorical=False)
# NA as an unused category
with tm.assert_produces_warning(FutureWarning):
@@ -613,7 +620,7 @@ def test_describe(self):
['b', 'a', 'c', np.nan], name='categories')
expected = DataFrame([[0, 0], [1, 1 / 3.], [2, 2 / 3.], [0, 0]],
columns=['counts', 'freqs'], index=exp_idx)
- tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected, check_categorical=False)
def test_print(self):
expected = ["[a, b, b, a, a, c, c, c]",
@@ -2885,13 +2892,17 @@ def test_value_counts(self):
categories=["c", "a", "b", "d"])
s = pd.Series(cats, name='xxx')
res = s.value_counts(sort=False)
- exp = Series([3, 1, 2, 0], name='xxx',
- index=pd.CategoricalIndex(["c", "a", "b", "d"]))
+
+ exp_index = pd.CategoricalIndex(["c", "a", "b", "d"],
+ categories=cats.categories)
+ exp = Series([3, 1, 2, 0], name='xxx', index=exp_index)
tm.assert_series_equal(res, exp)
res = s.value_counts(sort=True)
- exp = Series([3, 2, 1, 0], name='xxx',
- index=pd.CategoricalIndex(["c", "b", "a", "d"]))
+
+ exp_index = pd.CategoricalIndex(["c", "b", "a", "d"],
+ categories=cats.categories)
+ exp = Series([3, 2, 1, 0], name='xxx', index=exp_index)
tm.assert_series_equal(res, exp)
# check object dtype handles the Series.name as the same
@@ -2927,38 +2938,39 @@ def test_value_counts_with_nan(self):
index=pd.CategoricalIndex(["a", "b", np.nan])))
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- s = pd.Series(pd.Categorical(
- ["a", "b", "a"], categories=["a", "b", np.nan]))
- tm.assert_series_equal(
- s.value_counts(dropna=True),
- pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])))
- tm.assert_series_equal(
- s.value_counts(dropna=False),
- pd.Series([2, 1, 0],
- index=pd.CategoricalIndex(["a", "b", np.nan])))
+ s = pd.Series(pd.Categorical(["a", "b", "a"],
+ categories=["a", "b", np.nan]))
+
+ # internal categories are different because of NaN
+ exp = pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"]))
+ tm.assert_series_equal(s.value_counts(dropna=True), exp,
+ check_categorical=False)
+ exp = pd.Series([2, 1, 0],
+ index=pd.CategoricalIndex(["a", "b", np.nan]))
+ tm.assert_series_equal(s.value_counts(dropna=False), exp,
+ check_categorical=False)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- s = pd.Series(pd.Categorical(
- ["a", "b", None, "a", None, None], categories=["a", "b", np.nan
- ]))
- tm.assert_series_equal(
- s.value_counts(dropna=True),
- pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])))
- tm.assert_series_equal(
- s.value_counts(dropna=False),
- pd.Series([3, 2, 1],
- index=pd.CategoricalIndex([np.nan, "a", "b"])))
+ s = pd.Series(pd.Categorical(["a", "b", None, "a", None, None],
+ categories=["a", "b", np.nan]))
+
+ exp = pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"]))
+ tm.assert_series_equal(s.value_counts(dropna=True), exp,
+ check_categorical=False)
+ exp = pd.Series([3, 2, 1],
+ index=pd.CategoricalIndex([np.nan, "a", "b"]))
+ tm.assert_series_equal(s.value_counts(dropna=False), exp,
+ check_categorical=False)
def test_groupby(self):
- cats = Categorical(
- ["a", "a", "a", "b", "b", "b", "c", "c", "c"
- ], categories=["a", "b", "c", "d"], ordered=True)
+ cats = Categorical(["a", "a", "a", "b", "b", "b", "c", "c", "c"],
+ categories=["a", "b", "c", "d"], ordered=True)
data = DataFrame({"a": [1, 1, 1, 2, 2, 2, 3, 4, 5], "b": cats})
- expected = DataFrame({'a': Series(
- [1, 2, 4, np.nan], index=pd.CategoricalIndex(
- ['a', 'b', 'c', 'd'], name='b'))})
+ exp_index = pd.CategoricalIndex(['a', 'b', 'c', 'd'], name='b',
+ ordered=True)
+ expected = DataFrame({'a': [1, 2, 4, np.nan]}, index=exp_index)
result = data.groupby("b").mean()
tm.assert_frame_equal(result, expected)
@@ -2970,17 +2982,19 @@ def test_groupby(self):
# single grouper
gb = df.groupby("A")
- exp_idx = pd.CategoricalIndex(['a', 'b', 'z'], name='A')
+ exp_idx = pd.CategoricalIndex(['a', 'b', 'z'], name='A', ordered=True)
expected = DataFrame({'values': Series([3, 7, np.nan], index=exp_idx)})
result = gb.sum()
tm.assert_frame_equal(result, expected)
# multiple groupers
gb = df.groupby(['A', 'B'])
- expected = DataFrame({'values': Series(
- [1, 2, np.nan, 3, 4, np.nan, np.nan, np.nan, np.nan
- ], index=pd.MultiIndex.from_product(
- [['a', 'b', 'z'], ['c', 'd', 'y']], names=['A', 'B']))})
+ exp_index = pd.MultiIndex.from_product([['a', 'b', 'z'],
+ ['c', 'd', 'y']],
+ names=['A', 'B'])
+ expected = DataFrame({'values': [1, 2, np.nan, 3, 4, np.nan,
+ np.nan, np.nan, np.nan]},
+ index=exp_index)
result = gb.sum()
tm.assert_frame_equal(result, expected)
@@ -3054,8 +3068,10 @@ def f(x):
df = pd.DataFrame({'a': [1, 0, 0, 0]})
c = pd.cut(df.a, [0, 1, 2, 3, 4])
result = df.groupby(c).apply(len)
- expected = pd.Series([1, 0, 0, 0],
- index=pd.CategoricalIndex(c.values.categories))
+
+ exp_index = pd.CategoricalIndex(c.values.categories,
+ ordered=c.values.ordered)
+ expected = pd.Series([1, 0, 0, 0], index=exp_index)
expected.index.name = 'a'
tm.assert_series_equal(result, expected)
@@ -3369,30 +3385,28 @@ def test_assigning_ops(self):
# assign a part of a column with dtype != categorical ->
# exp_parts_cats_col
- cats = pd.Categorical(
- ["a", "a", "a", "a", "a", "a", "a"], categories=["a", "b"])
+ cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"],
+ categories=["a", "b"])
idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
values = [1, 1, 1, 1, 1, 1, 1]
orig = pd.DataFrame({"cats": cats, "values": values}, index=idx)
# the expected values
# changed single row
- cats1 = pd.Categorical(
- ["a", "a", "b", "a", "a", "a", "a"], categories=["a", "b"])
+ cats1 = pd.Categorical(["a", "a", "b", "a", "a", "a", "a"],
+ categories=["a", "b"])
idx1 = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
values1 = [1, 1, 2, 1, 1, 1, 1]
- exp_single_row = pd.DataFrame(
- {"cats": cats1,
- "values": values1}, index=idx1)
+ exp_single_row = pd.DataFrame({"cats": cats1,
+ "values": values1}, index=idx1)
# changed multiple rows
- cats2 = pd.Categorical(
- ["a", "a", "b", "b", "a", "a", "a"], categories=["a", "b"])
+ cats2 = pd.Categorical(["a", "a", "b", "b", "a", "a", "a"],
+ categories=["a", "b"])
idx2 = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
values2 = [1, 1, 2, 2, 1, 1, 1]
- exp_multi_row = pd.DataFrame(
- {"cats": cats2,
- "values": values2}, index=idx2)
+ exp_multi_row = pd.DataFrame({"cats": cats2,
+ "values": values2}, index=idx2)
# changed part of the cats column
cats3 = pd.Categorical(
@@ -3653,7 +3667,8 @@ def f():
exp_fancy["cats"].cat.set_categories(["a", "b", "c"], inplace=True)
df[df["cats"] == "c"] = ["b", 2]
- tm.assert_frame_equal(df, exp_multi_row)
+ # category c is kept in .categories
+ tm.assert_frame_equal(df, exp_fancy)
# set_value
df = orig.copy()
@@ -3708,7 +3723,7 @@ def f():
# ensure that one can set something to np.nan
s = Series(Categorical([1, 2, 3]))
- exp = Series(Categorical([1, np.nan, 3]))
+ exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3]))
s[1] = np.nan
tm.assert_series_equal(s, exp)
@@ -4083,10 +4098,12 @@ def f():
c = Categorical(["a", "b", np.nan])
with tm.assert_produces_warning(FutureWarning):
c.set_categories(["a", "b", np.nan], rename=True, inplace=True)
+
c[0] = np.nan
df = pd.DataFrame({"cats": c, "vals": [1, 2, 3]})
- df_exp = pd.DataFrame({"cats": Categorical(["a", "b", "a"]),
- "vals": [1, 2, 3]})
+
+ cat_exp = Categorical(["a", "b", "a"], categories=["a", "b", np.nan])
+ df_exp = pd.DataFrame({"cats": cat_exp, "vals": [1, 2, 3]})
res = df.fillna("a")
tm.assert_frame_equal(res, df_exp)
@@ -4128,7 +4145,9 @@ def cmp(a, b):
]:
result = valid(s)
- tm.assert_series_equal(result, s)
+ # compare series values
+ # internal .categories can't be compared because it is sorted
+ tm.assert_series_equal(result, s, check_categorical=False)
# invalid conversion (these are NOT a dtype)
for invalid in [lambda x: x.astype(pd.Categorical),
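
Several of the `test_categorical.py` expectations above now spell out that results such as `value_counts` carry the original `categories` on their index. A hedged illustration (hypothetical data, current pandas semantics):

```python
import pandas as pd

cats = pd.Categorical(['a', 'b', 'b'], categories=['c', 'a', 'b', 'd'])
s = pd.Series(cats)

res = s.value_counts(sort=False)
# with sort=False the counts follow the declared category order, and
# the result index is a CategoricalIndex carrying those categories
print(list(res.index))  # ['c', 'a', 'b', 'd']
print(res.tolist())     # [0, 1, 2, 0]
```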
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 2bad2fabcfc57..794b5e8aa5650 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -847,7 +847,7 @@ def test_to_xarray(self):
assert_almost_equal(list(result.coords.keys()), ['foo'])
self.assertIsInstance(result, DataArray)
- def testit(index, check_index_type=True):
+ def testit(index, check_index_type=True, check_categorical=True):
s = Series(range(6), index=index(6))
s.index.name = 'foo'
result = s.to_xarray()
@@ -859,7 +859,8 @@ def testit(index, check_index_type=True):
# idempotency
assert_series_equal(result.to_series(), s,
- check_index_type=check_index_type)
+ check_index_type=check_index_type,
+ check_categorical=check_categorical)
for index in [tm.makeFloatIndex, tm.makeIntIndex,
tm.makeStringIndex, tm.makeUnicodeIndex,
@@ -868,7 +869,8 @@ def testit(index, check_index_type=True):
testit(index)
# not idempotent
- testit(tm.makeCategoricalIndex, check_index_type=False)
+ testit(tm.makeCategoricalIndex, check_index_type=False,
+ check_categorical=False)
s = Series(range(6))
s.index.name = 'foo'
@@ -1409,9 +1411,8 @@ def test_to_xarray(self):
expected['f'] = expected['f'].astype(object)
expected['h'] = expected['h'].astype('datetime64[ns]')
expected.columns.name = None
- assert_frame_equal(result.to_dataframe(),
- expected,
- check_index_type=False)
+ assert_frame_equal(result.to_dataframe(), expected,
+ check_index_type=False, check_categorical=False)
# available in 0.7.1
# MultiIndex
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 5dfe88d04309e..38e6a066d3eea 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -3868,8 +3868,8 @@ def test_groupby_sort_categorical(self):
['(0, 2.5]', 1, 60],
['(5, 7.5]', 7, 70]], columns=['range', 'foo', 'bar'])
df['range'] = Categorical(df['range'], ordered=True)
- index = CategoricalIndex(
- ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range')
+ index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]',
+ '(7.5, 10]'], name='range', ordered=True)
result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]],
columns=['foo', 'bar'], index=index)
@@ -3879,13 +3879,15 @@ def test_groupby_sort_categorical(self):
assert_frame_equal(result_sort, df.groupby(col, sort=False).first())
df['range'] = Categorical(df['range'], ordered=False)
- index = CategoricalIndex(
- ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range')
+ index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]',
+ '(7.5, 10]'], name='range')
result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]],
columns=['foo', 'bar'], index=index)
- index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]',
- '(5, 7.5]', '(0, 2.5]'],
+ index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]', '(5, 7.5]',
+ '(0, 2.5]'],
+ categories=['(7.5, 10]', '(2.5, 5]',
+ '(5, 7.5]', '(0, 2.5]'],
name='range')
result_nosort = DataFrame([[10, 10], [5, 30], [6, 40], [1, 60]],
index=index, columns=['foo', 'bar'])
@@ -3975,7 +3977,8 @@ def test_groupby_categorical(self):
result = data.groupby(cats).mean()
expected = data.groupby(np.asarray(cats)).mean()
- exp_idx = CategoricalIndex(levels, ordered=True)
+ exp_idx = CategoricalIndex(levels, categories=cats.categories,
+ ordered=True)
expected = expected.reindex(exp_idx)
assert_frame_equal(result, expected)
@@ -3986,14 +3989,16 @@ def test_groupby_categorical(self):
idx = cats.codes.argsort()
ord_labels = np.asarray(cats).take(idx)
ord_data = data.take(idx)
- expected = ord_data.groupby(
- Categorical(ord_labels), sort=False).describe()
+
+ exp_cats = Categorical(ord_labels, ordered=True,
+ categories=['foo', 'bar', 'baz', 'qux'])
+ expected = ord_data.groupby(exp_cats, sort=False).describe()
expected.index.names = [None, None]
assert_frame_equal(desc_result, expected)
# GH 10460
- expc = Categorical.from_codes(
- np.arange(4).repeat(8), levels, ordered=True)
+ expc = Categorical.from_codes(np.arange(4).repeat(8),
+ levels, ordered=True)
exp = CategoricalIndex(expc)
self.assert_index_equal(desc_result.index.get_level_values(0), exp)
exp = Index(['count', 'mean', 'std', 'min', '25%', '50%',
@@ -6266,8 +6271,11 @@ def test_groupby_categorical_two_columns(self):
# Grouping on a single column
groups_single_key = test.groupby("cat")
res = groups_single_key.agg('mean')
+
+ exp_index = pd.CategoricalIndex(["a", "b", "c"], name="cat",
+ ordered=True)
exp = DataFrame({"ints": [1.5, 1.5, np.nan], "val": [20, 30, np.nan]},
- index=pd.CategoricalIndex(["a", "b", "c"], name="cat"))
+ index=exp_index)
tm.assert_frame_equal(res, exp)
# Grouping on two columns
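
The groupby expectations now construct their `CategoricalIndex` with the grouper's `ordered` flag. A sketch of the behavior being asserted (the `observed=False` keyword is assumed from current pandas; it did not exist in 0.18):

```python
import pandas as pd

cats = pd.Categorical(['a', 'a', 'b'],
                      categories=['a', 'b', 'c'], ordered=True)
df = pd.DataFrame({'vals': [1, 2, 3], 'g': cats})

# observed=False keeps the unobserved category 'c' in the result
res = df.groupby('g', observed=False)['vals'].sum()
print(res.index.ordered)  # True -- the grouper's flag is preserved
print(list(res.index))    # ['a', 'b', 'c']
```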
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index 862e2282bae2f..7136d7effc1fc 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -239,26 +239,16 @@ def test_just_na(self):
def test_include_na(self):
s = ['a', 'b', np.nan]
res = get_dummies(s, sparse=self.sparse)
- exp = DataFrame({'a': {0: 1.0,
- 1: 0.0,
- 2: 0.0},
- 'b': {0: 0.0,
- 1: 1.0,
- 2: 0.0}})
+ exp = DataFrame({'a': {0: 1.0, 1: 0.0, 2: 0.0},
+ 'b': {0: 0.0, 1: 1.0, 2: 0.0}})
assert_frame_equal(res, exp)
# Sparse dataframes do not allow nan labelled columns, see #GH8822
res_na = get_dummies(s, dummy_na=True, sparse=self.sparse)
- exp_na = DataFrame({nan: {0: 0.0,
- 1: 0.0,
- 2: 1.0},
- 'a': {0: 1.0,
- 1: 0.0,
- 2: 0.0},
- 'b': {0: 0.0,
- 1: 1.0,
- 2: 0.0}}).reindex_axis(
- ['a', 'b', nan], 1)
+ exp_na = DataFrame({nan: {0: 0.0, 1: 0.0, 2: 1.0},
+ 'a': {0: 1.0, 1: 0.0, 2: 0.0},
+ 'b': {0: 0.0, 1: 1.0, 2: 0.0}})
+ exp_na = exp_na.reindex_axis(['a', 'b', nan], 1)
# hack (NaN handling in assert_index_equal)
exp_na.columns = res_na.columns
assert_frame_equal(res_na, exp_na)
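
For the `get_dummies` cases being reflowed above, the behavior under test is unchanged: a plain call drops NaN, while `dummy_na=True` appends a NaN column. A compact check (column dtypes differ across pandas versions, so only shapes and labels are asserted):

```python
import numpy as np
import pandas as pd

s = ['a', 'b', np.nan]

res = pd.get_dummies(s)                    # NaN row is all zeros
res_na = pd.get_dummies(s, dummy_na=True)  # adds a trailing NaN column

print(res.columns.tolist())  # ['a', 'b']
print(res_na.shape)          # (3, 3)
```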
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 8682302b542be..0ec2c96dbbd7d 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -25,7 +25,7 @@
from pandas.core.common import (is_sequence, array_equivalent,
is_list_like, is_datetimelike_v_numeric,
is_datetimelike_v_object, is_number,
- needs_i8_conversion)
+ needs_i8_conversion, is_categorical_dtype)
from pandas.formats.printing import pprint_thing
from pandas.core.algorithms import take_1d
@@ -657,7 +657,7 @@ def assert_equal(a, b, msg=""):
def assert_index_equal(left, right, exact='equiv', check_names=True,
check_less_precise=False, check_exact=True,
- obj='Index'):
+ check_categorical=True, obj='Index'):
"""Check that left and right Index are equal.
Parameters
@@ -675,6 +675,8 @@ def assert_index_equal(left, right, exact='equiv', check_names=True,
5 digits (False) or 3 digits (True) after decimal points are compared.
check_exact : bool, default True
Whether to compare number exactly.
+ check_categorical : bool, default True
+ Whether to compare internal Categorical exactly.
obj : str, default 'Index'
Specify object name being compared, internally used to show appropriate
assertion message
@@ -752,6 +754,11 @@ def _get_ilevel_values(index, level):
if check_names:
assert_attr_equal('names', left, right, obj=obj)
+ if check_categorical:
+ if is_categorical_dtype(left) or is_categorical_dtype(right):
+ assert_categorical_equal(left.values, right.values,
+ obj='{0} category'.format(obj))
+
def assert_class_equal(left, right, exact=True, obj='Input'):
"""checks classes are equal."""
@@ -999,6 +1006,7 @@ def assert_series_equal(left, right, check_dtype=True,
check_names=True,
check_exact=False,
check_datetimelike_compat=False,
+ check_categorical=True,
obj='Series'):
"""Check that left and right Series are equal.
@@ -1023,6 +1031,8 @@ def assert_series_equal(left, right, check_dtype=True,
Whether to check the Series and Index names attribute.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
+ check_categorical : bool, default True
+ Whether to compare internal Categorical exactly.
obj : str, default 'Series'
Specify object name being compared, internally used to show appropriate
assertion message
@@ -1049,6 +1059,7 @@ def assert_series_equal(left, right, check_dtype=True,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
+ check_categorical=check_categorical,
obj='{0}.index'.format(obj))
if check_dtype:
@@ -1085,6 +1096,11 @@ def assert_series_equal(left, right, check_dtype=True,
if check_names:
assert_attr_equal('name', left, right, obj=obj)
+ if check_categorical:
+ if is_categorical_dtype(left) or is_categorical_dtype(right):
+ assert_categorical_equal(left.values, right.values,
+ obj='{0} category'.format(obj))
+
# This could be refactored to use the NDFrame.equals method
def assert_frame_equal(left, right, check_dtype=True,
@@ -1096,6 +1112,7 @@ def assert_frame_equal(left, right, check_dtype=True,
by_blocks=False,
check_exact=False,
check_datetimelike_compat=False,
+ check_categorical=True,
check_like=False,
obj='DataFrame'):
@@ -1127,6 +1144,8 @@ def assert_frame_equal(left, right, check_dtype=True,
Whether to compare number exactly.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
+ check_categorical : bool, default True
+ Whether to compare internal Categorical exactly.
check_like : bool, default False
If true, then reindex_like operands
obj : str, default 'DataFrame'
@@ -1168,6 +1187,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
+ check_categorical=check_categorical,
obj='{0}.index'.format(obj))
# column comparison
@@ -1175,6 +1195,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_names=check_names,
check_less_precise=check_less_precise,
check_exact=check_exact,
+ check_categorical=check_categorical,
obj='{0}.columns'.format(obj))
# compare by blocks
@@ -1199,6 +1220,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_less_precise=check_less_precise,
check_exact=check_exact, check_names=check_names,
check_datetimelike_compat=check_datetimelike_compat,
+ check_categorical=check_categorical,
obj='DataFrame.iloc[:, {0}]'.format(i))
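
Taken together, the `pandas/util/testing.py` changes thread a new `check_categorical` flag through `assert_index_equal`, `assert_series_equal` and `assert_frame_equal`. A hedged sketch of its intended use, written against the public `pandas.testing` namespace of current pandas (in 0.18 these functions lived in `pandas.util.testing`):

```python
import pandas as pd
from pandas.testing import assert_series_equal

left = pd.Series(pd.Categorical(['a', 'b'], categories=['a', 'b']))
right = pd.Series(pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']))

# the strict default compares .categories (and thus fails here) ...
try:
    assert_series_equal(left, right)
    strict_ok = True
except AssertionError:
    strict_ok = False

# ... while check_categorical=False only compares the values
assert_series_equal(left, right, check_categorical=False)
print(strict_ok)  # False
```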
- [x] closes #13076
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`

https://api.github.com/repos/pandas-dev/pandas/pulls/13249
created_at: 2016-05-21T13:43:29Z | closed_at: 2016-05-21T23:58:25Z | merged_at: null | updated_at: 2016-05-22T01:29:19Z