Python: why pickle?

Pickle is unsafe because it constructs arbitrary Python objects by invoking arbitrary functions. However, this also gives it the power to serialize almost any Python object, without any boilerplate or even white-/black-listing (in the common case). That’s very desirable for some use cases: quick & easy serialization, for example for pausing and resuming a … Read more
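As a minimal sketch of that convenience (the Pause class and the checkpoint.pkl file name below are made up purely for illustration), an ordinary object graph round-trips through pickle with no extra code:

import pickle

class Pause:                      # hypothetical class, purely for illustration
    def __init__(self, step, state):
        self.step = step
        self.state = state

checkpoint = Pause(step=42, state={"open_files": ["a.txt"], "scores": (1, 2, 3)})

with open("checkpoint.pkl", "wb") as f:   # write in binary mode
    pickle.dump(checkpoint, f)

with open("checkpoint.pkl", "rb") as f:   # read in binary mode
    resumed = pickle.load(f)

print(resumed.step, resumed.state)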

Saving and loading objects and using pickle

As for your second problem:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python31\lib\pickle.py", line 1365, in load
    encoding=encoding, errors=errors).load()
EOFError

After you have read the contents of the file, the file pointer will be at the end of the file – there will be no further data to read. You … Read more
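A minimal sketch of that failure mode and its fix (the data.pkl file name is just illustrative): a second load from the same handle starts at end-of-file and raises EOFError unless you seek back to the beginning, or reopen the file, first.

import pickle

with open("data.pkl", "wb") as f:
    pickle.dump({"a": 1}, f)

with open("data.pkl", "rb") as f:
    first = pickle.load(f)    # reads from the start of the file
    # pickle.load(f)          # would raise EOFError: the pointer is now at end-of-file
    f.seek(0)                 # rewind (or reopen the file) before loading again
    again = pickle.load(f)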

Is pickle file of python cross-platform?

Python’s pickle is perfectly cross-platform. The problem is most likely due to EOL (End-Of-Line) differences between Windows and Linux. Make sure to open your pickle files in binary mode both when writing them and when reading them, using open()’s “wb” and “rb” modes respectively. Note: passing pickles between different versions of Python can cause trouble, so try … Read more
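A minimal sketch of the binary-mode advice (the data.pkl file name is illustrative):

import pickle

data = {"numbers": [1, 2, 3], "text": "same bytes on every platform"}

with open("data.pkl", "wb") as f:   # "wb", not "w": no end-of-line translation
    pickle.dump(data, f)

with open("data.pkl", "rb") as f:   # "rb", not "r"
    restored = pickle.load(f)

assert restored == data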

multiprocessing.Pool – PicklingError: Can’t pickle <type ‘thread.lock’>: attribute lookup thread.lock failed

multiprocessing passes tasks (which include check_one and data) to the worker processes through an mp.SimpleQueue. Unlike Queue.Queue, everything put into an mp.SimpleQueue must be picklable. Queue.Queue objects are not picklable:

import multiprocessing as mp
import Queue

def foo(queue):
    pass

pool = mp.Pool()
q = Queue.Queue()
pool.map(foo, (q,))

yields this exception:

UnpickleableError: Cannot pickle <type 'thread.lock'> objects

Your data includes packages, which … Read more
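If a queue really has to reach the pool workers, one common workaround (a sketch under that assumption, not part of the answer above) is a manager queue: the workers receive a picklable proxy rather than the queue itself.

import multiprocessing as mp

def foo(queue):
    queue.put("hello from a worker")

if __name__ == "__main__":
    manager = mp.Manager()
    q = manager.Queue()           # a proxy object; proxies are picklable
    pool = mp.Pool()
    pool.map(foo, [q] * 4)        # the proxy travels to the workers without error
    pool.close()
    pool.join()
    while not q.empty():
        print(q.get())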

How to change the serialization method used by the multiprocessing module?

I believe the patch you’re referring to works if you’re using a multiprocessing “context” object. Using your pickle2reducer.py, your client should start with:

import pickle2reducer
import multiprocessing as mp

ctx = mp.get_context()
ctx.reducer = pickle2reducer.Pickle2Reducer()

And ctx has the same API as multiprocessing. Hope that helps!
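For context, a pickle2reducer.py along these lines is roughly what that setup assumes – a reducer that pins the pickle protocol to 2. This is a reconstruction under that assumption, not code from the answer, and multiprocessing.reduction.AbstractReducer is only available on Python 3.6+:

from multiprocessing.reduction import ForkingPickler, AbstractReducer

class ForkingPickler2(ForkingPickler):
    @classmethod
    def dumps(cls, obj, protocol=2):
        # Always serialize with pickle protocol 2, whatever the caller asked for.
        return ForkingPickler.dumps(obj, protocol)

def dump(obj, file, protocol=2):
    ForkingPickler2(file, protocol).dump(obj)

class Pickle2Reducer(AbstractReducer):
    ForkingPickler = ForkingPickler2
    register = ForkingPickler2.register
    dump = dump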

What is faster – Loading a pickled dictionary object or Loading a JSON file – to a dictionary? [closed]

The speed actually depends on the data, its content and size. But anyway, let’s take some example JSON data and see which is faster (Ubuntu 12.04, Python 2.7.3), comparing pickle, cPickle, json, simplejson, ujson and yajl. Given this JSON structure dumped into test.json and test.pickle files:

{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
… Read more
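A minimal sketch of how such a load comparison can be timed, assuming the test.json and test.pickle files mentioned above already exist (the third-party libraries – simplejson, ujson, yajl – would be timed the same way if installed):

import json
import pickle
import timeit

with open("test.json") as f:
    raw_json = f.read()
with open("test.pickle", "rb") as f:
    raw_pickle = f.read()

n = 1000
t_json = timeit.timeit(lambda: json.loads(raw_json), number=n)
t_pickle = timeit.timeit(lambda: pickle.loads(raw_pickle), number=n)

print("json.loads:   %.3fs for %d runs" % (t_json, n))
print("pickle.loads: %.3fs for %d runs" % (t_pickle, n))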