Short answer to broad question: There are no explicit guarantees made about hashing stability aside from the overall guarantee that `x == y` requires that `hash(x) == hash(y)`. There is an implication that `x` and `y` are both defined in the same run of the program (you can't perform `x == y` where one of them doesn't exist in that program, obviously, so no guarantees are needed about the hash across runs).
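To make that one guarantee concrete, a quick within-a-run illustration (plain built-ins, nothing assumed beyond the documented equality/hash contract):

```python
# Equal values must hash equal, even across numeric types: 1 == 1.0 == True,
# so all three are required to produce the same hash within a run.
assert 1 == 1.0 == True
assert hash(1) == hash(1.0) == hash(True)
```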
Longer answers to specific questions:
> Is [your belief that `int`, `float`, `tuple(x)`, `frozenset(x)` (for `x` with consistent hash) have consistent hashes across separate runs] always true and guaranteed?
It's true of numeric types, with the mechanism being officially documented, but the mechanism is only guaranteed for a particular interpreter for a particular build. `sys.hash_info` provides the various constants, and they'll be consistent on that interpreter, but on a different interpreter (CPython vs. PyPy, 64-bit build vs. 32-bit build, even 3.n vs. 3.n+1) they can differ (documented to differ in the case of 64- vs. 32-bit CPython), so the hashes won't be portable across machines with different interpreters.
No guarantees on the algorithm are made for `tuple` and `frozenset`; I can't think of any reason they'd change it between runs (if the underlying types are seeded, `tuple` and `frozenset` benefit from it without needing any changes), but they can and do change the implementation between releases of CPython (e.g. in late 2018 they made a change to reduce the number of hash collisions in short `tuple`s of `int`s and `float`s), so if you store off the hashes of `tuple`s from, say, 3.7, and then compute hashes of the same `tuple`s in 3.8+, they won't match (even though they'd match between runs on 3.7, or between runs on 3.8).
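You can observe the run-to-run behavior on a single build by launching fresh interpreters with different seeds (the `fresh_hash` helper here is just for illustration, not anything from the standard library):

```python
import os
import subprocess
import sys

def fresh_hash(expr, seed):
    """Compute hash(expr) in a brand-new interpreter with a fixed hash seed."""
    env = {**os.environ, "PYTHONHASHSEED": str(seed)}
    out = subprocess.run(
        [sys.executable, "-c", f"print(hash({expr}))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# A tuple of numbers hashes identically across runs of the same build...
assert fresh_hash("(1, 2.5)", 1) == fresh_hash("(1, 2.5)", 2)
# ...while a str's hash changes with the seed (so tuples of str do too).
assert fresh_hash("'spam'", 1) != fresh_hash("'spam'", 2)
```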
> If so, is that expected to stay that way?
Expected to, yes. Guaranteed, no. I could easily see seeded hashes for `int`s (and by extension, for all numeric types, to preserve the numeric hash/equality guarantees) for the same reason they seeded hashes for `str`/`bytes`, etc. The main hurdles would be:
- It would almost certainly be slower than the current, very simple algorithm.
- By documenting the numeric hashing algorithm explicitly, they’d need a long period of deprecation before they could change it.
- It's not strictly necessary (if web apps need seeded hashes for DoS protection, they can always just convert `int`s to `str` before using them as keys).
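That last workaround is a one-liner in practice (a minimal sketch of the idea, not an API from anywhere):

```python
# Converting int keys to str means CPython's seeded, DoS-resistant string
# hashing applies to the dict lookups instead of the unseeded int hash.
counts = {}
for n in [1, 2, 1, 3]:
    key = str(n)  # the seeded hash applies to the str, not the original int
    counts[key] = counts.get(key, 0) + 1

print(counts)  # {'1': 2, '2': 1, '3': 1}
```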
> Is the `PYTHONHASHSEED` only applied to salt the hash of strings and byte arrays?
Beyond `str` and `bytes`, it applies to a number of other things that implement their own hashing in terms of the hash of `str` or `bytes`, often because they're already naturally convertible to raw bytes and are commonly used as keys in `dict`s populated by web-facing frontends. The ones I know of off-hand include the various classes of the `datetime` module (`datetime`, `date`, `time`, though this isn't actually documented in the module itself), and read-only `memoryview`s with byte-sized formats (which hash equivalently to hashing the result of the view's `.tobytes()` method).
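The `memoryview` case is easy to check within a run, since the equivalence is documented:

```python
# A read-only memoryview with a byte-sized format hashes the same as
# hash(view.tobytes()), which is the (seeded) bytes hash.
data = b"payload"
view = memoryview(data)
assert hash(view) == hash(view.tobytes()) == hash(data)
```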
> What would be a good way to write a consistent hash replacement for `hash(frozenset(some_dict.items()))` when the `dict` contains various types and classes?
The simplest/most composable solution would probably be to define your `const_hash` as a single-dispatch function, using it the same way you use `hash` itself. This avoids having one single function defined in a single place that must handle all types: you keep the default `const_hash` implementation (which just relies on `hash` for those things with known consistent hashes) in a central location, and provide additional registrations there for the built-in types you know aren't consistent (or which might contain inconsistent stuff), while still allowing people to extend the set of things it covers seamlessly, by importing your `const_hash` and decorating the implementation for their type with `@const_hash.register`. It's not significantly different in effect from your proposed `const_hash`, but it's a lot more manageable.
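A minimal sketch of that shape, assuming `functools.singledispatch` and a digest-based replacement for the seeded `str`/`bytes` hashes (the combining constants for `tuple` are illustrative, not CPython's algorithm):

```python
import functools
import hashlib

@functools.singledispatch
def const_hash(obj):
    # Default: fall back to built-in hash for types whose hashes are
    # already consistent across runs (ints, floats, None, ...).
    return hash(obj)

@const_hash.register
def _(s: str):
    # Replace the seeded str hash with a stable digest-based one.
    return int.from_bytes(hashlib.sha256(s.encode("utf-8")).digest()[:8], "big")

@const_hash.register
def _(b: bytes):
    return int.from_bytes(hashlib.sha256(b).digest()[:8], "big")

@const_hash.register
def _(t: tuple):
    # Order-sensitive combination of element hashes (simple sketch, not
    # CPython's tuple hashing algorithm).
    result = 0x345678
    for item in t:
        result = ((result * 1000003) ^ const_hash(item)) & ((1 << 64) - 1)
    return result

@const_hash.register
def _(s: frozenset):
    # Order-independent combination for unordered collections.
    result = 0
    for item in s:
        result ^= const_hash(item)
    return result

@const_hash.register
def _(d: dict):
    # Matches the hash(frozenset(some_dict.items())) idea, but with
    # const_hash applied recursively to keys and values.
    return const_hash(frozenset((k, const_hash(v)) for k, v in d.items()))
```

Users would then extend it for their own classes with `@const_hash.register`, exactly as they would for any other single-dispatch function.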