<p>Wes is of course right! I just want to provide a more complete code example. I ran into the same problem with a 129 Mb file, and the fix was:</p>
<pre><code>import pandas as pd

# chunksize makes read_csv return a TextFileReader, an iterator that
# yields DataFrames of 1000 rows each instead of loading the file at once.
tp = pd.read_csv('large_dataset.csv', iterator=True, chunksize=1000)
# Concatenate the chunks into one DataFrame; if this raises an error
# on your pandas version, pass list(tp) instead of tp.
df = pd.concat(tp, ignore_index=True)
</code></pre>
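<p>If even the concatenated DataFrame would be too large for memory, you can also process each chunk as it is read and never build the full DataFrame at all. A minimal sketch of that pattern; the column name <code>value</code> is a hypothetical placeholder for whatever you are aggregating:</p>
<pre><code>import pandas as pd

total = 0
# Iterate over the file 1000 rows at a time; each chunk is a DataFrame.
for chunk in pd.read_csv('large_dataset.csv', chunksize=1000):
    total += chunk['value'].sum()  # 'value' is an assumed column name
print(total)
</code></pre>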