Talk:Hash table


Omitted definition in article: what is "s"?


In the section "Choosing a good hash function", a (presumably integer) "s" is repeatedly referred to. However, as far as I can see, it is totally undefined in the Wikipedia article. Although (some) computer scientists might infer s from context, other readers may not. This is bad form. I suggest changing it.

152.3.68.83 (talk) 18:39, 22 November 2013 (UTC)Reply

Table vs Map

In computer science, a hash table, or a hash map, is a data structure that associates keys with values.

Already the first sentence is misleading, if not even wrong. A hash map associates keys with values (i.e. maps) and is implemented using a hashtable, though a hashtable itself does not necessarily do any mapping or association of keys and values. Just consider a hashtable where all you do is store integers; there are no values here. —Preceding unsigned comment added by 91.36.112.7 (talk) 14:51, 23 January 2008 (UTC)Reply

This comment was already posted once and was moved to #Confusing a hash table and a hash map because newer entries are supposed to go to the *end*. -- intgr [talk] 23:08, 23 January 2008 (UTC)Reply

Pseudo-Code


I have a few small issues with the current Hash table article. First, the pseudo-code is a little convoluted—in that the findSlot(..) function is not 100% obvious. It hides away the linear probe code, which makes reading the set(..) and lookup(..) functions a little confusing on first read. I was just going over it with an intro computer science student and even had to read it twice myself. Josh 08:11, 10 April 2006 (UTC)Reply

OK, I tweaked find_slot() to make it more obvious (to me) that it is doing a linear probe.
Or did I just make it more confusing?
Please feel free to improve it further.

O(1) Removal


There is a line which states:

The O(1) remove method above is only possible in linearly probed hash tables with single-slot stepping. In the case where many records are to be deleted in one operation, marking the slots for deletion and later rebuilding may be more efficient.

I'm not really sure that this is accurate. I might agree with calling it expected O(1), or average O(1). But it is not, as far as I can tell, worst-case O(1). In fact, remove is worst-case O(n), as in all hash operations. Josh 08:11, 10 April 2006 (UTC)Reply

In my experience the O() notation usually refers to expected runtime, unless otherwise stated, but feel free to disambiguate it.WolfKeeper 14:07, 10 April 2006 (UTC)Reply

I know that there has been some discussion of amortized runtime here, but I'm not really sure that you can use amortized analysis on this function. Please correct me if I am wrong, otherwise, I will change shortly. Further, I am not sure why the article says only possible in linearly probed hash tables... If the remove function is O(1) in linear probe, then why not with a quadratic probe? Josh 08:11, 10 April 2006 (UTC)Reply

You need the delete to remove spaces. The quadratic probe moves a different amount through the hash table, depending on the initial number from the hash function, rather than the slot that the initial collision occurred at. So two hash entries may have collided initially at, say, slot 10; and one of them got put into slot 20. If slot 10 is deleted, with quadratic probing there's no way to compact down the slot 20 into slot 10, because there's no way to find it; the hash of the object at 10 doesn't tell you how to find any of the successors. With linear probing you just need to look in the next entry and then decide whether to move it or not.WolfKeeper 14:07, 10 April 2006 (UTC)Reply
I think that this discussion reveals a real need to disambiguate expected vs. worst-case runtime. I think that we'll both agree that remove(..) in a hash table requires worst-case O(n) steps, even with a linear probe. If you don't then we need to first discuss that..
If you don't see why the worst-case isn't particularly relevant, then I don't want to discuss this further.WolfKeeper 16:31, 10 April 2006 (UTC)Reply
I found a very good explanation in CLRS03, Chapter 11. Basically, the text proves that, on average, 1/(1 − α) probes are needed in an unsuccessful search, where α is the load factor. Using this formula, we can see that even at α = 0.9, the number of probes required is 10. I would still argue that worst case is relevant, but this clearly does not apply until the hash table is very full. The magic number of 80% is explained in the article, but perhaps this would be better understood with a graph illustrating how the performance changes with load? --Josh 01:24, 11 April 2006 (UTC)Reply
Now, let's go back to what you're saying. Yes, you are correct, that in a linear probe, you only need to look at the next slot, although I would nitpick and say that you need to look at the next n slots if the hash table is full. However, in a quadratic probe, instead of saying slot = (hash(key) + i) mod size, you say slot = (hash(key) + i^2) mod size. We assume that hash(key) was cached and does not need to be recomputed. Therefore, as long as we know i, we can clearly find the next slot in constant time. The same applies for double hashing. You just say slot = (hash(key) + i*hash(hash(key))) mod size. Since we already know hash(key), then hash(hash(key)) takes O(k), as per previous discussion.
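(For concreteness, a minimal C sketch of the three probe sequences being discussed; the helper names and the cached h = hash(key) are illustrative assumptions:)

#include <stddef.h>

/* Illustrative helpers for the probe sequences discussed above; h is the
   cached hash(key) and i is the probe number (0, 1, 2, ...). */
size_t linear_probe(size_t h, size_t i, size_t table_size)
{
    return (h + i) % table_size;
}

size_t quadratic_probe(size_t h, size_t i, size_t table_size)
{
    return (h + i * i) % table_size;
}

size_t double_hash_probe(size_t h, size_t h2, size_t i, size_t table_size)
{
    /* h2 is a second hash of the key, computed once per lookup */
    return (h + i * h2) % table_size;
}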
I'm sorry if my explanation was inadequate. I would recommend you read Knuth or a good book on it.WolfKeeper 16:31, 10 April 2006 (UTC)Reply
I've looked through CLRS, and I don't really see a reference for the fact that an O(1) remove can only be achieved using a linear probe. If we can do a search in any open address hash table in constant time, then shouldn't we be able to delete elements and eliminate holes in constant time as well? --Josh 01:24, 11 April 2006 (UTC)Reply
In summary, I think that there needs to be a lot of reworking between the term O(..) and expected O(..) and worst-case O(..). Further, I think that we might argue that remove(..) is expected O(1), but not worst-case O(1). I think that it is always worst-case O(n) with linear and quadratic probing. With double hashing, we would say that it is worst-case O(k n), assuming that k is not fixed size. Josh
With all due respect, I completely disagree. The whole point of hash tables is to design them so that you are able to apply statistical techniques like averaging. Using worst case values for randomised parameters gives significantly inaccurate results.WolfKeeper 16:31, 10 April 2006 (UTC)Reply
I'm not suggesting that we re-do all of the analyses in terms of worst-case performance. I understand that doing so would give an incredibly inaccurate picture of hash table performance. I simply think that we should qualify the asymptotic notation with "expected", to acknowledge that bad cases do exist. --Josh 01:24, 11 April 2006 (UTC)Reply
I agree with Josh here. Unqualified big O notation specifically implies worst-case behavior in many publications. Better to either add the word "expected" where appropriate or make a big sweeping blanket statement at the beginning that it's implied. Deco 11:30, 8 June 2006 (UTC)Reply


"When we say T(N) [is] O(f(N)), we are guaranteeing that the function T(N) grows at a rate no faster than f(N)." (Data Structures and Algorithm Analysis in C , Third Edition, Mark Allen Weiss, page 44)
It is entirely possible to implement a hash table that makes a collision every time a new value is added. This would be a useless implementation, but a valid one. A hash table can't then be O(1) ever in any way if the above possibility exists.
"Hash tables can be used to implement the insert and contains operations in constant average time...The worst case for hashing generally results from an implementation error." (page 207)
I would suggest that the article should be changed to mention that hash tables are not O(1) by the definition of Big-O notation, but that all good implementations come closer to acting like they are than binary search trees. 199.111.229.133 00:23, 16 October 2007 (UTC)Reply

Something i don't understand from the article


If when adding to a table the hashing function leads to a collision for 2 particular keys, a and b, then using probing b will be stored somewhere after a. When then looking up the value associated with b, won't the table actually return the value associated with a? How does it know which of the two values to return, and how does it get to the index associated with b?

I hope that makes sense. Iae 11:22, 8 June 2006 (UTC)Reply

Providing we're not using perfect hashing, we must search through all the places the desired element could be, comparing our search key with each one, until we hit the end of the list. This is why hash tables require not only a hash function but a means of comparing for equality. This only works efficiently because in a hash table with enough room, these lists are overwhelmingly very short. Deco 11:27, 8 June 2006 (UTC)Reply
Ah I see, thanks very much. Iae 11:49, 8 June 2006 (UTC)Reply
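A minimal C sketch of the lookup described above, assuming linear probing, a slot layout where key == NULL marks an empty slot, and a table that is never completely full:

#include <stddef.h>
#include <string.h>

struct slot { const char *key; int value; };   /* key == NULL marks an empty slot */

extern size_t hash(const char *key);           /* the table's hash function (assumed) */

int lookup(const struct slot *table, size_t num_slots, const char *key, int *value_out)
{
    size_t i = hash(key) % num_slots;
    while (table[i].key != NULL) {             /* stop at the first empty slot */
        if (strcmp(table[i].key, key) == 0) {  /* compare keys, not just hashes */
            *value_out = table[i].value;
            return 1;                          /* found */
        }
        i = (i + 1) % num_slots;               /* probe the next slot */
    }
    return 0;                                  /* hit an empty slot: not present */
}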

how do you delete from a hash table that uses probing?

How do you know that you have hit the end of the list. Or to cut to the gist of the matter; How do you delete from a hash table that uses probing.

You know you hit the end of the list in a (probed) hash table when you hit an empty "not occupied" slot. In theory, one could have an "occupied" bit for each row of the hash table that is initially 0. (In practice, typically each slot that is not occupied begins with a NULL byte. If it *is* occupied, that byte is the first byte of the key).

I know of 2 very different ways to delete from a hash table: the "deleted bit" method, and the "move stuff around" method. (My understanding is that the "move stuff around" method is impossible to implement with "quadratic probing" or "double hashing" hash tables (and probably a few other types). Those hash tables are forced to use the "deleted bit" method.)

Let me try to explain those 2 methods

"deleted bit" method: works with any kind of hash table. Every row of the hash table (in addition to the key, value pairs) has a "deleted" bit that starts out 0. To delete a record from the hash table, use the key to find it (using find_slot), then set the "deleted" bit to 1. (Some hash tables cram the "deleted" bit and the "occupied" bit into the "key" part of the record, reserving 1 special key to indicate "unoccupied", another special key to indicate "deleted", and any other value to indicate a real occupied "key" ).

"move stuff around" method: only works with linear probing.

The function remove(key) in the article is supposed to describe the "move stuff around" method. How could we make it easier to understand?

Often when the application deletes a record, the following slot is already not occupied. In that case you can wipe out the record by marking that record as not occupied -- overwriting the key with a NULL byte -- and you are done.

Unfortunately, there are a bunch of special cases the application needs to be able to handle, even though they rarely happen. As the article says, "For all records in a cluster, there must be no vacant slots between their natural hash position and their current position (else lookups will terminate before finding the record)." The application needs to scan through *all* the records between the record you want to delete, and the next "not occupied" slot, and make sure the above requirement is met. In some cases, you must move all those records up 1 slot. In other cases, you must move some (but not all) of those records.

In yet other cases, you must not move any of them, just mark the deleted slot "not occupied" just like the simple case. (For example, if you want to delete the record in slot 870, and you see that the "key" in slot 871 actually hashes to "871", and slot 872 is "not occupied" -- you must mark slot 870 "not occupied", and leave the record in slot 871 alone).
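As an attempt at that, here is a C sketch of the "move stuff around" method, following the rule quoted above: the record at j is left alone only if its natural position k lies cyclically in (i, j]; the slot layout and hash() are illustrative assumptions:

#include <stddef.h>

struct lslot { int occupied; const char *key; int value; };

extern size_t hash(const char *key);           /* assumed hash function */

/* i is the index of the record being deleted */
void remove_at(struct lslot *table, size_t num_slots, size_t i)
{
    size_t j = i;
    table[i].occupied = 0;                     /* open the hole */
    for (;;) {
        j = (j + 1) % num_slots;
        if (!table[j].occupied)
            break;                             /* end of the cluster, done */
        size_t k = hash(table[j].key) % num_slots;
        /* leave the record at j alone if k lies cyclically in (i, j] */
        if (i <= j ? (i < k && k <= j) : (i < k || k <= j))
            continue;
        table[i] = table[j];                   /* pull the record back into the hole */
        table[j].occupied = 0;
        i = j;                                 /* the hole has moved to j */
    }
}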

Once you understand how it works, please update the article to make it easier to understand for the next person.

--68.0.120.35 07:42, 5 March 2007 (UTC)Reply

More concrete suggestions for hash function?


I wanted to check in here before making huge changes to the article, but one thing I'd find very helpful is a discussion of concrete choices for the hash function itself. Here's an outline of what I'd say:

It's very common to implement hash tables with poor hashing functions. Knuth is largely to blame, advocating the very weak "multiplicative hash" function, and even going so far as to claim that its clustering property is a good thing! (section 6.4, TAoCP vol III, 2e, p. 517). Variants of the multiplicative hash remain popular, as do other linear techniques such as CRC.

Surveying the field, two excellent newer alternatives stand out. For most simple hashing tasks, the Fowler Noll Vo Hash is an excellent performer. It is among the simplest of known hash functions, is quite fast, and has a good distribution. For use in a hash table, the FNV-1a variant is likely the best choice, as it has better (more dispersed) clustering behavior than FNV-1.

For some applications, particularly when keys are long, the newer Jenkins lookup3.c hash function may be a better performer. It achieves better speed by consuming 12 bytes of the input per iteration, as opposed to one byte for FNV. Disadvantages include greater code complexity, a sensitivity to machine endianness (causing potential difficulties when reusing hash values across disparate computers), and the need to pad byte-aligned input to 32 bit words. Do keep in mind, though, that benchmarks showing impressive speed for large blocks of data may not translate to real-world gains. Common usage patterns involve relatively short keys, so the amount of time spent in the hashing function inner-loop may be less relevant than, say, the gains from a compiler being able to automatically inline a simpler hash function.--Raph Levien 02:32, 5 July 2006 (UTC)Reply
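For concreteness, a minimal C sketch of the 32-bit FNV-1a variant mentioned above, using the standard offset basis and prime (xor first, then multiply):

#include <stddef.h>
#include <stdint.h>

uint32_t fnv1a_32(const unsigned char *data, size_t len)
{
    uint32_t hash = 2166136261u;         /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        hash ^= data[i];                 /* FNV-1a: xor in the byte ... */
        hash *= 16777619u;               /* ... then multiply by the FNV prime */
    }
    return hash;
}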

Some interesting links I came across while searching:

You might also want to check out HSH 11/13 which seems to distribute quite well (as per published graph) and also performs nicely with its handful of code lines.

Followup:

I went ahead and completely rewrote the section. I may have come close to bending the letter of the law on doing original research (measuring avalanche behavior of Jenkins One-at-a-time hash using Bret Mulvey's tools) and NPOV (I advocate particular hash functions and criticize other popular choices), but everything I said is verifiable, and I think the result is the most helpful and reliable advice on choosing hash functions anywhere on the Internet or in dead tree form.

[Image: Avalanche behavior of HSH 11/13 hash over 3-byte keys]

I measured the HSH 11/13 hash using Mulvey's AvalancheTest, and found it slightly inferior to the Jenkins One-at-a-time hash. Herbert Glarner's tests for uniform distribution are not as sensitive as Mulvey's chi-squared tests. Further, HSH 11/13 is quite slow, because of its inner loop.

Obviously, I also changed my mind about FNV, swayed in large part by Mulvey's analysis. It's not a bad choice, being one of the simplest of the "pretty good" choices, but even there the XOR-folding adds a bit more complexity. In any case, I feel like I've provided enough information for people to make their own informed choice.--Raph Levien 21:31, 12 August 2006 (UTC)Reply

I agree that this analysis of various hash functions are worth putting into Wikipedia.
But wouldn't the hash function article be a better place?
Or are there special considerations for hash functions used in a hash table that don't apply to hash functions used for other purposes?
--68.0.120.35 07:42, 5 March 2007 (UTC)Reply
Yes, this section evaluates hash functions solely for their use in a hash table. Functions like the Jenkins one-at-a-time are very well suited for such uses, and extremely bad for other hash applications like message integrity checking, which is the domain of cryptographic hashes. Fair enough? --Raph Levien 04:12, 9 May 2007 (UTC)Reply
Referring to the statement: "Further, HSH 11/13 is quite slow, because of its inner loop." - The HSH documentation states, that a "key like 'Yvonne' [...] requires 92 machine instructions to generate a hash value". - Now I am just wondering, does there exist a speed comparison of any sort in order to choose a fast one? - Regards, --Gulliveig 16:02, 14 August 2007 (UTC)Reply

Benchmarks


Simplicity and speed are readily measured objectively

There is a caveat here. Experimental measurements of speeds are necessarily done on a "representative sample" of inputs. It may be the case that such or such algorithm performs with varying speed depending on the kind of inputs, and that some sample, representative of one kind of inputs, may not be representative of another kind. I don't think this would happen with usual hash functions on e.g. strings but this may happen in more esoteric cases. David.Monniaux 14:38, 13 September 2006 (UTC)Reply

Without checking further into the matter of this article specifically, the speed of an algorithm is usually (or at least often) expressed as the worst case speed, using the Big O notation. It is an objective measurement which gives the asymptotic upper bound of the execution time as a function of the input length. Of course you are correct in that one algorithm may perform well with a certain kind of input and badly with other kinds of input, but if one algorithm always works in O(n) time, it is, after a certain point, always faster than an algorithm that works in O(n^2) time. It's just pure mathematics to define that point, as is calculating the asymptotic upper bound too.
I would be more concerned about the simplicity claim. "Simplicity" is not something you can measure mathematically. You can always count the instructions etc but that's both machine and implementation dependent. —ZeroOne (talk / @) 16:44, 14 August 2007 (UTC)Reply

Unfinished question


Problem 1: The Hash Table will be used for storage of student records. You may assume the maximum number of Items will not exceed 240. An item corresponds to a student record. A key is taken to be an ASCII character string of maximum length 32 containing the name of the student. The data is taken to be the 9-digit id and the name of the state of residence of the student in India. A list node contains an Item and a reference to the next node. You will need to implement the classes Item and ListNode with constructors and appropriate data access operations.
2. Write a main() program to test your class HashTable.
3. Input: The name of the input file, containing data items (student records) and data operations in the format given below, and the name of the output file, will be passed to your program on the command line or interactively.
Data Items: student-id, student-name, state-of-residence. Example: 200412018, Swati Mishra, Uttar Pradesh (one per line).
Data Operations: <operation> <item>. The <operation> is s, i, or d for search, insert and delete, respectively. The item will have fields: student-name, student-id, and state-of-residence. The fields of an item will be separated by commas, and the operation will be separated from the item by a colon ":" and a space. Example: s: 200211001, Akash Gokhale, Maharashtra (one per line).
The data items will be separated from the data operations by a blank line.
4. Output: Your program is to read the input file and populate the Hash Table with student records by repeated Insert() operations. And then print to the output file the size of the Hash Table, and the size of each linked list in the Hash Table. Then, it will continue to read each line and execute appropriate data operations. Following each Insert()/Delete() operation, it will output the size of the Hash Table and the size of each of the linked lists, and for each Search operation, it will output

-- 220.225.53.35 09:18, 11 October 2006


joaat_hash function error


Hi, I've implemented the joaat_hash function which is described in pseudocode on this page in my C program, and encountered an error. Better said, I produced one, since I misunderstood len as the length of the hashtable, and not as the length of the key.

here is my (correct) function implemented in C, please consider updating the article:

int joaat_hash(char *key, size_t len) //len is the size of the hashtable
{
    unsigned int hash = 0;
    unsigned int i;
    
    for (i = 0; i < strlen(key); i++)
    /* [...] as in the article */ 
    
    return (hash % len);
}

--88.76.141.17 03:43, 2 January 2007 (UTC)Reply

You are right -- the "len" in the pseudocode in the article is the length of the key. (That pseudocode gives the same results as the original C implementation "One-at-a-Time Hash" ub4 one_at_a_time(char *key, ub4 len) http://www.burtleburtle.net/bob/hash/doobs.html , right?)

I think the above implementation gives the same results as the original C implementation for all possible ASCII strings. Unfortunately, the above implementation gives different results for other sorts of data structures cast into byte arrays, when those data structures include a zero byte.

Since the original version gives *better* results for those data structures, and the *same* results for everything else, I think I'll stick with the original version, except use "key_len" to clarify exactly of what it is the length. (Or have I missed something obvious?) --68.0.120.35 07:42, 5 March 2007 (UTC)Reply
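For reference, a C rendering of the original one-at-a-time hash with the length parameter named key_len, leaving the reduction modulo the table size to the caller (this is just the well-known function, shown here to make the distinction concrete):

#include <stddef.h>
#include <stdint.h>

uint32_t joaat_hash(const unsigned char *key, size_t key_len)
{
    uint32_t hash = 0;
    for (size_t i = 0; i < key_len; i++) {
        hash += key[i];
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    hash += hash << 3;      /* final mixing steps */
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;            /* caller then computes hash % table_size */
}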

cryptographic hash functions


From the article: "In fact, even a cryptographic hash does not provide protection against an adversary who wishes to degrade hash table performance by choosing keys all hashing to the same bucket."

I thought that one of the criteria for a cryptographic hash function was that it be infeasible to find collisions. Therefore, it would provide defense against such an adversary. Or am I misunderstanding something? Ralphmerridew 21:55, 30 January 2007 (UTC)Reply

Hash tables are usually very limited in size, and the length of the hash function is clipped modulo the hash table size. That is, while a hash might produce a result of 160 bits, it is obvious that a hash table of 2^160 entries would be infeasible, not to mention useless. In-memory hash tables rarely exceed millions of entries, and it is relatively trivial to brute force through this amount of hashes for finding collisions. For a sense of magnitude, a low-end processor today can compute around half a million SHA-1 hashes of 16-character strings per second. -- intgr 23:07, 30 January 2007 (UTC)Reply
Doesn't that require that the attacker also knows the hash table size? (Alternately, if the attacker adds enough entries to be able to work out the size, it's also likely to force a rehash.) Ralphmerridew 00:51, 31 January 2007 (UTC)Reply
Well yes, if secrecy is a choice, it's a good idea to choose a large prime as the hash table size. This is, however, unrelated to whether one is using a cryptographic hash function or a classic one — it is impossible to cause collisions if you cannot predict the modulus or one of its factors.
But lots of hash table implementations resize automatically to predefined constant sizes, and as far as I know, many even use the worst choice of power-of-two sizes. Power-of-a-known-number sizes mean that the attacker can cause clustering even if they mispredict the size, and that the attack is effective even through reclustering. This is because hash mod 2^n = hash mod 2^m when n ≤ m, and even if n > m, it will just cause clustering on 2^(n−m) different keys simultaneously, which can still be fatal with large hash tables. -- intgr 07:19, 31 January 2007 (UTC)Reply
I have no idea what I was thinking earlier, the equation should be (hash mod 2^m) mod 2^n = hash mod 2^n when n ≤ m. -- intgr 17:41, 31 January 2007 (UTC)Reply
Re point 1, with a bad hash function, an attacker can choose keys that hash to the same value, which will cause collisions regardless of modulus. Ralphmerridew 16:41, 31 January 2007 (UTC)Reply
Oh, were you thinking of generating hashes small enough to always be less than the modulus? Good point, never thought that. (though I'm not the author of the quoted claim) -- intgr 17:41, 31 January 2007 (UTC)Reply
No, I mean that, with a bad hash function, an attacker could, say, generate arbitrarily many strings that hash to, say, 0x16de fa32 4261 1ab3. Whatever modulus is used, all those strings will fall into the same bucket. With a cryptographically secure hash function, given a known modulus, an attacker might be able to produce a large number of strings such that (hash % modulus) are all the same, but he'd be unable to produces a significant number that all have the same hash value. Ralphmerridew 21:43, 31 January 2007 (UTC)Reply
I wouldn't count on the speed (well, slowness) of a hash function for protection against clustering attacks. While it makes the attack slightly more expensive, it also complicates hash table inserts and lookups by the same proportional amount. If you can cope with the extra processing overhead, you're most likely better off with data structures that do not exhibit such critical worst-case performance, such as various balanced trees.
Perhaps this is an overly cryptographic point of view, but it is not that expensive to generate truncated hash collisions for cryptographic hash algorithms by brute force. And as mentioned above, a general purpose low-end processor (Athlon 64 3000 here) is capable of generating half a million hashes per second. A heavily parallelized FPGA-, or even ASIC-based chip could improve that by several magnitudes. (Such programmable FPGA computers are readily available on the market, e.g. COPACOBANA, which can do 10^13 DES trials per second) -- intgr 22:37, 31 January 2007 (UTC)Reply
I'm not depending on "Given string 'str', calculate hash(str)" being slow; I'm depending on "Given 'value', find a string 'str' such that hash(str) == value". The latter is part of the definition of cryptographically secure. And even 10^13 trials/second means brute force will take about three weeks per full collision with a 64 bit hash, and is ineffective against even the deprecated MD5 (128 bits) or SHA-1 (160 bits). By comparison, IIU multiplicative hash C, it's possible to find a full collision in O(#bits) time. Ralphmerridew 23:12, 31 January 2007 (UTC)Reply
Did you forget that to utilize all these 64 bits, you need to store the table somewhere? There are no practical applications of a 2^64-entry hash table, and the space requirements are proportional to the brute force collision-finding time (both scale as O(2^n) for an n-bit hash). Just storing this many hashes (or 8-byte values) alone will take up 2^64 × 8 bytes (well over a hundred exabytes) of space. If you create a smaller hash table, you have to truncate the hash (throw away some of its information), which inherently speeds up brute force cracking. -- intgr 23:29, 31 January 2007 (UTC)Reply
But a collision against a truncated hash is only useful against a small number of moduli, and then only if the attacker knows or can work out the modulus. Ralphmerridew 23:57, 31 January 2007 (UTC)Reply
This seems to effectively boil down to what I formulated earlier, "were you thinking of generating hashes small enough to always be less than the modulus?". I do agree that in case the attacker does not know the modulus, and the modulus is a non-tiny prime, then this effectively disables clustering attacks. I disagree that when the modulus is known, the hash table needs to be "small", but let's leave it at that. -- intgr 00:54, 1 February 2007 (UTC)Reply
Well, I've changed the article now and put a {{fact}} on it, since we still need to cite a source for it. Or do you happen to have one? -- intgr 12:41, 5 February 2007 (UTC)Reply
I agree that, for a certain special case, Mallory (the attacker) can guarantee that all the keys he generates hash to the same bucket, even when a cryptographic hash is in use.
That special case happens when an attacker doesn't know the exact table size, but does know that it is a power of 2, and at most some maximum size -- Mallory knows that slot_number = hash(key) % 2^n, and he knows the particular cryptographic hash() used, and although he doesn't know n exactly, he knows some k such that n <= k.
By doing some work reminiscent of hashcash to zero out the k least-significant bits, Mallory can generate keys that all hit the same bucket. Mallory generates O(2^k) trial keys before he finds one that he knows will hit the same bucket. (With most non-cryptographic hashes, Mallory only needs to do O(1) work to construct a key that will hit the same bucket).
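Roughly, the work Mallory does looks like this (a sketch only; some_hash() stands in for whatever published hash function the target uses, the key format is arbitrary, and 0 < k < 32 is assumed):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

extern uint32_t some_hash(const char *key, size_t len);   /* the target's known hash */

/* Find a key whose hash agrees with target_bucket in its k low bits, so it
   lands in the same bucket for any power-of-two table of 2^j slots, j <= k.
   Expected cost: about 2^k trials per key found, as described above. */
int craft_key(uint32_t target_bucket, unsigned k, char *out, size_t out_size)
{
    uint32_t mask = ((uint32_t)1 << k) - 1;
    for (uint32_t counter = 0; counter != UINT32_MAX; counter++) {
        int n = snprintf(out, out_size, "key-%lu", (unsigned long)counter);
        if ((some_hash(out, (size_t)n) & mask) == (target_bucket & mask))
            return 1;                                      /* found a colliding key */
    }
    return 0;
}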
But so what? Is there any application where this is relevant?
What sorts of applications need to accept keys from a potentially malicious attacker?
Say we did the "secure" thing by picking a L bit cryptographic hash, and "randomly" picking a new large prime number p every time we resize the table, and using slot_number = ( hash(key) % p ) % tablesize. (Since tablesize << p << 2^L , does it matter whether the tablesize is a power of 2 or not?).
Even with this "secure" hash -- even if Mallory has no clue what hash we are using or how we are reducing it to the tablesize -- Mallory could still force collisions by sending O(tablesize) submissions.
(By the birthday paradox, he is *likely* to cause a collision even with O(tablesize^(1/2)) submissions).
Is there some application where this "secure" hash would work, but the above power-of-2 "special case" wouldn't work?
--68.0.120.35 07:42, 5 March 2007 (UTC)Reply
The article text now says that "However, using cryptographic hash functions can protect against collision attacks when the hash table modulus and its factors can not be kept secret from the attacker, or alternatively, by applying a secret salt." (there's a comment there saying "see discussion on talk page; just needs a reference"). This is not true by the argument already given here: If (as the article says) we assume that the attacker knows the hash table modulus, then he may search for collisions by brute force, as the effective amount of bits is log(number of buckets the hash table can use), which is generally very small by cryptographic standards. Salting works with cryptographic hashes, but for non-cryptographic ones there are no guarantees.
In addition, the case where the attacker can't predict the modulus with a reasonable probability, i.e. when it's not feasible to just generate collisions for the most probable modulus, then the second most probable, etc. until the modulus is discovered, does not seem very important. Which real-life hash table implementations have moduli chosen with this in mind?
There's also the possibility that information about a modulus can be obtained interactively by throwing values at the hash table and seeing if the response time slows down, even if slightly (see timing attack). A similar attack might work to figure out an effectively equivalent salt (producing the same internal state after processing the salt and before processing the key) when non-cryptographic hashes are used: The probability of certain classes of keys colliding might depend on e.g. whether a certain bit of internal state is 1 or 0, so one might throw keys from those classes at the hash table, measure slowdown and compute a new probability for that bit being 1 or 0, and so on independently for each bit, effectively performing binary search for the salt. -- Coffee2theorems (talk) 20:29, 15 June 2008 (UTC)Reply
Yeah, using a good cryptographic hash with a decent salt should work OK, independently of the modulus though.- (User) WolfKeeper (Talk) 23:28, 15 June 2008 (UTC)Reply

What lots of people seem to forget when discussing the need for hash functions that reduce predictable collisions is that they are only needed for hostile environments, where an attacker can control the input data *and* where it is important that proper speed is maintained (e.g. tcp/ip hashtables within a kernel). It is much much less important in simple programs where a hashtable is just used to store some data that might even be created dynamically by the program, or doesn't occur in the input set the program was designed for. It might be possible to degrade a hashtable that just uses some "id * 1000000007UL & ((1<<n)-1)" for finding the appropriate slot in your face recognition software by feeding it with a carefully crafted artificial bitmap pattern. But why care? Trash in, trash out.

Verbiage?


A recent edit deleted most of the following paragraph, claiming it was "verbiage":

"Cryptographic hash functions are believed to provide good hash functions for any table size s, either by modulo reduction or by bit masking. They may also be appropriate if there is a risk of malicious users trying to sabotage a network service by submitting requests designed to generate a large number of collisions in the server's hash tables.[citation needed] However, these presumed qualities are hardly worth their much larger computational cost and algorithmic complexity, and the risk of sabotage can be avoided by cheaper methods (such as applying a secret salt to the data, or using a universal hash function)."

This paragraph is explaining that *cyptographic* hash functions (a different concept altogether, see lead section) are not necessarily good choices for hash tables, because their only advantage (probabilistically guaranteed good performance even on data submitted by hostile clients) can be obtained at smaller cost by using ordinary (non-crypto) hash functions with secret salt (as discussed above). The recent edit removed this information. But perhaps the wording is not clear and needs to be improved. All the best, --Jorge Stolfi (talk) 21:28, 6 February 2010 (UTC)Reply

Open hashing


I'm considering merging the open hashing article with this article, as has already been done for closed hashing. For that I think we should rename the section on Chaining to Open hashing and make more clear the already mentioned fact that linked lists are only one way, though perhaps the most popular one, of organizing hash buckets and introduce a mention of buckets allocated in a contiguous memory/disk space that is presently discussed in open hashing. Jyotirmoyb 05:16, 20 February 2007 (UTC)Reply

I agree -- let's keep all the different types together in this one article until the article gets too big. At that time, it will (hopefully) be easier to look at the article and decide the "natural" breaking points to split it up into multiple articles. Big Buckets First. --68.0.120.35 07:42, 5 March 2007 (UTC)Reply
I agree, too. It makes more sense to contrast "closed hashing" with "open hashing". I'd like to see the two sections so labeled.Anjin\\talk 02:23, 25 April 2007 (UTC)Reply
I agree, too. I can see no good reason why not to incorporate such a strongly related topic into this article. I find the question of length of the article largely irrelevant as long as the topics in it are relevant. I'd rather read one long coherent article (possibly skipping sections) than having to jump back and forth in Wikipedia. My 5 cents... TFJ 09:54, 14 June 2007 (UTC 1)

Makes sense - can't have it both ways - with closed hashing merged (though it's hard to find any content called "closed hashing" in the article) and open hashing in its own article. On the other hand, there is no problem with an article on "open" and "closed" and other types of hashing if there is enough content to justify it. Does Wikipedia have guidelines on article length? --121.45.246.110 14:10, 9 April 2007 (UTC)Reply

I agree with all this, because open hashing is simply irrelevant in any context outside of hash tables. Dcoetzee 01:13, 15 June 2007 (UTC)Reply
The Wikipedia guideline on article length is the Wikipedia: article size guideline. --DavidCary (talk) 20:02, 18 November 2021 (UTC)Reply

Bug in 'remove' Pseudocode


The remove function will loop indefinitely if the hash table is full, since it only exits when it finds an unoccupied slot. I think adding the following after the j := (j + 1) line should fix it:

 if j = i
   exit loop

32.97.110.142 20:36, 11 April 2007 (UTC)Reply
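In C, the guarded probe loop might look like this (slot layout and hash() are illustrative assumptions); returning num_slots signals a completely full table instead of looping forever:

#include <stddef.h>
#include <string.h>

struct pslot { int occupied; const char *key; int value; };

extern size_t hash(const char *key);       /* assumed hash function */

size_t find_slot(const struct pslot *table, size_t num_slots, const char *key)
{
    size_t start = hash(key) % num_slots;
    size_t j = start;
    do {
        if (!table[j].occupied || strcmp(table[j].key, key) == 0)
            return j;                      /* free slot, or the key itself */
        j = (j + 1) % num_slots;
    } while (j != start);                  /* stop after one full pass around the table */
    return num_slots;                      /* sentinel: table is full, key not present */
}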

Ambiguity in amortized analysis


In the discussion of the costs of resizing the hash, there seems to be an ambiguity in the amortized running time analysis:

... If in the end it contains n elements, then the total add operations performed for all the resizings is:

1 + 2 + 4 + ... + n = 2n - 1. Because the costs of the resizings form a geometric series, the total cost is O(n)....

The n in "2n - 1" is actually the value of the nth element, not the number of elements. The sentence that follows makes it seem that the sum of a geometric series is linear, which is clearly not the case.

Guray9500 00:47, 16 April 2007 (UTC)Reply

Actually, it is the number of elements. I'm not sure what makes you think it's the value of the nth element - it's not. And the sum of a geometric sequence is linear in n, if n is the last term of the sequence. Dcoetzee 11:34, 9 June 2007 (UTC)Reply

Mistake in table resizing section ?


The article states:

To see why this is true, suppose a hash table using chaining begins at the minimum size of 1 and is doubled each time it fills above 100%. If in the end it contains n elements, then the total add operations performed for all the resizings is:

1 + 2 + 4 + ... + n = 2n - 1.

This does not seem right. The expected number of steps it takes to double the table size from (1) to (n>1) at each overfill should be: truncate(LOG2(n-1)) + 1 (where LOG2 means logarithm base 2 of n). Also, if my math is correct 1 + 2 + 4 + ... + n = n(n + 1)/2, but this does not seem the right computation to make: we're not supposed to add the values corresponding to each step but to count them.

If this reasoning is correct the total cost should read O(LN(n)) rather than O(n), which should mean this method scales very well with n.

I can't understand your reasoning at all. First of all 1 + 2 + 4 + ... + n is in fact 2n - 1, and not n(n + 1)/2 (this is 1 + 2 + 3 + ... + n). Second, this is summing the add operations performed during the resizings. During each resizing, all elements currently in the table have to be added again to the new table, and each resizing occurs at a power of 2 number of elements, so the numbers being added are correct. I really don't understand what you're saying. Dcoetzee 10:29, 9 June 2007 (UTC)Reply
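Maybe a small worked case helps to see which quantity is which (this is just restating the computation above):

 1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1 = 2n - 1, where n = 2^k is the final number of elements.

For example, with n = 8 the resizings re-insert 1 + 2 + 4 + 8 = 15 = 2×8 - 1 elements in total. The number of resizings is log2(n), but the total copying work is dominated by the last resizing, which is why the sum is linear in n rather than logarithmic or quadratic.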

How this can be true (question about statement in section 'Time complexity and common uses of hash tables')?


In that section it is written: "Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table." (*)

Searching in sorted list of keys by using binary search can be done in O(log N). Average running time of binary search is also log N (this is according to http://planetmath.org/encyclopedia/BinarySearch.html). I don't know any better algorithm for non special sorted data. So I think statement (*) isn't true in asymptotic sense.

However if we know that N is bounded by some number we can say that O(log N) = O(1). But this would be wrong... But in some philosophical (or maybe practical) sense we could be right (we have large but fixed maximum amount of RAM, HDD space, maximum amount of hash table entries etc). But this would be not right in asymptotic sense where N not bounded but free.

What I don't understand?

Really, in a fairly real sense, hash tables don't search, they go straight to the right place in the table.WolfKeeper 22:56, 28 June 2007 (UTC)Reply
At a maximum table utilisation factor (say 80%), the Hash table will do on average a constant number of comparisons (k= ~2 at 80%) which is still an O(1) operation, as k is independent of n, the size of the table. In other words, at 80% full, there's going to be only about 2 colliding entries. Hope this helps.WolfKeeper 22:56, 28 June 2007 (UTC)Reply
Thanks, WolfKeeper and sorry for my ignorance. Your answer made me do more searching and I understood that, I was confusing hash_map with map (in C++ terminology). map permits lookup, insertion and removal in logarithmic time on average. And hash_map permits these operations in constant time on average. 78.56.1.150 12:37, 29 June 2007 (UTC)Reply

Implementations


This section is quickly devolving into yet another trivia list. MegaHasher 19:45, 11 July 2007 (UTC)Reply

Hash table and binary tree in one data structure


Are there any references on a data structure that implements a hash table and binary tree at the same time? MegaHasher 20:04, 3 August 2007 (UTC)Reply

Um, huh? It's not clear to me why you'd want to do that, or how that even makes sense. Can you expand a bit on why you're asking? Dcoetzee 02:45, 4 August 2007 (UTC)Reply
Seems that a treap could be combined with a hash table. I wasn't able to come up with any citations though. MegaHasher 06:11, 14 August 2007 (UTC)Reply
I still have no idea what you mean. Combined in what way, for what purpose? Dcoetzee 08:13, 14 August 2007 (UTC)Reply
to provide O(1) read access, and ordered traversal at the same time MegaHasher 08:19, 14 August 2007 (UTC)Reply
You could certainly do that by merely keeping a hash table and a binary tree at the same time with the same contents - but on modern systems, I don't think the lookup cost of hash tables compared to an effective cache-aware tree data structure is significantly better (it may even be worse). Considering the space overhead and time overhead for other operations, I wouldn't consider this an effective solution. Dcoetzee 21:50, 20 August 2007 (UTC)Reply
I think the ordered searching and hashing ideas can be combined in the hash table algorithm itself, as seen in this paper, but it does not seem to be exactly what you want. Also, instead of linked lists for buckets, you can use any other data structure that implements the necessary operations, even trees, but that may still be not what you want. Honeypot95 (talk) 12:29, 20 April 2020 (UTC)Reply

Citations


This article could be improved with more direct citations. MegaHasher 06:11, 14 August 2007 (UTC)Reply

Random table based hash function


I am suspicious of the random table based hash function. This seems to be a variation on Pearson's hash function, except there are multiple tables, therefore much worse cache misses. Pearson's hash function is not all that fast to start with. MegaHasher 03:22, 18 August 2007 (UTC)Reply

For single-byte indices and 32 bit hash numbers, the tables are 256*4 = 1024 bytes each. This algorithm is obviously best when the cache size is larger, but also very suitable when a few keys account for most of the lookups, or when the cost of a collision is far greater than the cost of a memory cache miss (such as when hashing into disk records).

By having multiple tables, this algorithm ensures lookups to the same index in the table don't cancel each other out, thereby generating literally perfectly randomly distributed but repeatable hashes. The Wikipedia article on Pearson's hash function indicates it employs a table with the numbers 0 to 255 randomly arranged, and it ultimately selects one of the 256 different values in the table at random. The multi-table algorithm you removed doesn't simply select one of the 256 values in one of the tables, it xors values together to create a far stronger hash. Look at the algorithm properly and you might appreciate it more.
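In other words, something along these lines (a rough sketch of the multi-table scheme as described; the key-length bound and table name are illustrative assumptions, and filling the tables from a random source at startup is omitted):

#include <stddef.h>
#include <stdint.h>

#define MAX_KEY_LEN 32                          /* illustrative bound on key length */

/* rand_table[i][b]: a fixed random 32-bit word for byte value b at position i,
   filled once at startup from a random source (initialization omitted). */
extern uint32_t rand_table[MAX_KEY_LEN][256];

uint32_t multi_table_hash(const unsigned char *key, size_t len)
{
    uint32_t hash = 0;
    for (size_t i = 0; i < len && i < MAX_KEY_LEN; i++)
        hash ^= rand_table[i][key[i]];          /* one table per byte position, xor together */
    return hash;
}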

FWIW, the algorithm is proven technology, used to hash > 3 million highly-repetitive financial security keys by the world's leading financial data provider's proprietary security database, handling hundreds of thousands of database inserts per second.

I'll reinstate the algorithm, with the attribution removed as you've (offensively) referred to it as "vanity" material, and I hope you, Mr "MegaHasher", have much better reasons before you remove it again.

Wikipedia generally does not accept original research, but you are welcome to publish your algorithm in a different location, and submit to Wikipedia as a cited work of concise length. The article of Hash function is probably a better location. MegaHasher 04:19, 29 August 2007 (UTC)Reply

I don't consider this to be original "research". It is simple stuff, and important precisely because of that. Too many programmers either don't use hash tables or use poor algorithms because they don't have a feel for or ability to assess mathematical hashes. This may suit the few specialists who write the algorithms and relish the elitist mystique of it all, but it's a big loss for computing as a whole. More generally, I might accept that this belongs on the Hash Function page if you also moved the joaat_hash off this one. You can't have it both ways. I agree with the earlier discussion comments that your posting and defense of Bob Jenkin's algorithm looks very inappropriate on this page, and all the more so for your refusal to accept anything that removes any of the focus from it. —Preceding unsigned comment added by 205.228.104.142 (talk) 23:59, August 29, 2007 (UTC)

Wikipedia's policy is very simple. Please look at the welcoming material in your user talk page. You need to publish your full article in a location off Wikipedia, then write an one paragraph summary of it on Wikipedia, and give a citation link to your full article. MegaHasher 18:20, 30 August 2007 (UTC)Reply
As far as I can tell, the hash function that was deleted 02:46, 29 August 2007 is an implementation of Zobrist hashing, a topic which has more than enough references to prove its Wikipedia:Notability. The joaat_hash function has been moved from this article to Jenkins hash function#one-at-a-time. Further discussion of those functions should probably go on those respective specific talk pages, or on the more general hash function talk page. --DavidCary (talk) 04:32, 13 August 2016 (UTC)Reply

Load factor


I have seen two separate referenced articles that had measurements that show under separate chaining, an increase of load factor by 1 has the impact of increasing CPU instruction count around 4 to 5%. This is much better than what I would have guessed. The first reference is "Dynamic Hash Tables" by Larson (1988), and the second reference is "Main-memory linear hashing" by Pettersson (1993). Have I completely mis-read these articles? MegaHasher 21:33, 20 August 2007 (UTC)Reply

Picture

File:Hashtable.PNG

Can somebody please add this picture to the article? I believe it will help a lot of people to understand the notion of the hash table. --yanis 12:25, 21 August 2007 (UTC)Reply

Yes, it is a good picture, but no we can not add it to the article since at the image page you state that you copied it from a non-free source. But as you state on the image page we can draw a license free image that is inspired by it. Then I suggest some changes:
  • Make it look like the images we already have (colour match, shapes and so on).
  • The output of the hash function would be more clear if it is written as [04] instead of [4].
  • The hash function would be more clear if it is written as "Key modulo 100" instead of "Key % 100". Of course, a better hash function usually is to do modulo for instance 101 but that would make the image unclear.
  • Perhaps use the phone number as key?
  • Each record must also contain the key, since hash tables need to check if they found the right record or not. So I suggest the record should hold the key (the phone number) and some data (the persons name), just like the other images we now use.
Since I made most of the images in the article now I might take a shot at it when I am in the mood. (But I had lots of input from others when doing them and some one else remade them as SVGs.)
--David Göthberg 13:00, 21 August 2007 (UTC)Reply

Not really O(n)


Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. However, the very rare worst-case lookup time can be as bad as O(n).

This is essentially wrong. In a non-buggy implementation (that is, when the table is a reasonable size, the table is never too full, and the hash function is working correctly), n collisions cannot occur.WolfKeeper 14:36, 16 October 2007 (UTC)Reply

WolfKeeper: You have no idea what you are talking about. In standard implementations in standard SW like all dynamic languages the linear chaining strategy is open to well-known hash collision attacks, which can be defeated even with randomized seeds. So hash tables need to protect themselves against those O(n) worst-cases, which is e.g. a DOS attack on the web against php, perl, ruby with well-crafted keys. And those O(n) possibilities lead the implementations as those attacks appear in practical web security.ReiniUrban (talk) 19:12, 2 April 2014 (UTC)Reply

That's wrong for another reason. Dynamic perfect hashing methods like cuckoo hashing provide constant lookup time. Only updates may take O(n) time in the worst case. 23:24, 2 November 2007 (UTC)

The probability of n collisions by sheer chance is p^n where p is a number usually well under 0.8. The chances of say, 200 collisions is 0.8^200 = 4e-20, and on modern computers that would still run quickly. At a million hashes every second that statistically wouldn't happen once in a million years; and that assumes the table is full, and that's a small table by hashing standards. There's more chance of the hardware failing, by about a factor of 10^6.WolfKeeper 14:36, 16 October 2007 (UTC)Reply

But Big-Oh notation is based on the worst case and the worst case for hash tables would be poor or incorrect implementation/small n. The only reason hash tables never 'feel' like that is because only good implementations would be kept and reused. You can't ignore possibilities just because they aren't likely to occur when you're talking with Big-Oh.199.111.229.133 18:47, 16 October 2007 (UTC)Reply
But O(n) on a 50% full table with 1000 entries and a good hash function, it's not just "very rare" as in the text, it just never happens and never would happen before the end of the universe. If it has never been seen, and never would be seen, then it isn't rare, it just never happens.WolfKeeper 19:12, 16 October 2007 (UTC)Reply
It really is O(n) in the worst case - regardless of the table size or hash function, you can artificially construct a set of keys that all map to the same hash bucket. It never happens in practice - but a worst-case formal analysis must consider this case. Dcoetzee 21:33, 7 December 2007 (UTC)Reply
similarity --- quicksort is also O(n*n) in a similar manner (for few special cases) . 84.16.123.194 (talk) 15:44, 28 January 2008 (UTC)Reply

I feel like we are talking past each other, because we are talking about several different things but using the same name for all of them:

  • Many hash table implementations use some nice, fixed, nonrandomized, published hash function and always do power-of-two resizes so the table always has a good amount of empty space -- I'll call these "known hash tables" -- is there a better name?
  • Crosby and Wallach (if I'm reading them right) recommend a randomized algorithm that, each time a hash table is created, picks a fresh new random hash function for that table from some known, published family of hash functions. Then the algorithm uses that same hash function for as long as the hash table exists, which could be many years. My understanding is that the "secret salt" is intended to have effectively the same characteristics. I'll call these "fixed secret hash tables" -- is there a better name?
  • Several algorithms -- cuckoo hashing, hopscotch hashing, dynamic perfect hashing, etc. -- periodically switch to a *different* fresh new random hash function, from some known, published family of hash functions, often enough to prevent too many collisions. I'll call these "temporary secret algorithms" -- is there a better name?

I agree with Wolfkeeper that, with natural data, known hash tables are so unlikely to ever exhibit O(n) performance that "it just never happens and never would happen before the end of the universe." However, the hash table#Drawbacks section has a few references that imply that known hash tables are vulnerable to a DoS attack involving carefully crafted unnatural data that tricks the computer into the worst-case O(n) performance. Some of those references seem to recommend fixed secret hash tables, which still have worst-case O(n) performance, but make it practically impossible for an attacker to exploit that possibility unless the attacker knows the secret. However, other references imply that an attacker can learn the secret after an hour of probing, and then use that secret to send carefully crafted unnatural data that tricks the computer into the worst-case O(n) performance. Those references recommend temporary secret algorithms, which generally guarantee O(1) lookup times even in the worst case under active attack, at the cost of making the worst-case insertion time slower.

I feel this article needs to clearly distinguish between algorithms that have guaranteed O(1) lookup times even when under active attack, vs. algorithms that suffer from O(n) lookup times when under active attack. Is that difference big enough that these two categories of algorithms need to have two separate Wikipedia articles? --DavidCary (talk) 16:39, 29 July 2016 (UTC)Reply

I see that elsewhere on this talk page someone says "Dynamic perfect hashing methods like cuckoo hashing provide constant lookup time."

Is "dynamic perfect hashing methods" a better name for the entire category of algorithms that have guaranteed O(1) lookup times even when under active attack -- a category I previously called "temporary secret algorithms" -- a category that includes cuckoo hashing, hopscotch hashing, etc.? --DavidCary (talk) 15:52, 10 May 2017 (UTC)Reply

Confusing a hash table and a hash map


The intro to this article is severely misguided and confusing. A hash table is not the same thing as a "hash map" (which is a Javaism meaning a map/dictionary implemented by a hash table instead of a tree). Hash tables have nothing to do with keys and values -- a hash table only knows that elements have hashes, and that elements are at the index in its array corresponding to the value of the hash function for an element. For example, a hash table can also be used to implement an unordered set. Additionally, the concept of "buckets" is not intrinsic to a hash table. It is a common strategy for resolving collisions but there are others (double hashing, for instance).

What's the correct way to tag a page that has inaccurate information on it?

There are some non-wikipedia ghits that have the correct definition, such as: http://www.sparknotes.com/cs/searching/hashtables/section1.html

However, it's a fairly common misconception that hashes map keys to values, since so often they are used to implement maps or dictionaries. I am fairly sure that Knuth makes this distinction, but I don't have it on hand.

--67.180.15.227 (talk) 16:49, 5 December 2007 (UTC)Reply

I always thought the word "table" in "hash table" stood for "lookup table", go figure. In colloquial usage, the word "hash table" nearly always refers to a table with values, e.g. a hash map. Furthermore, given an explanation of how hash maps work, it doesn't take much imagination to realize that one can also construct a hash table that only stores keys without values. So I think your assertion "severely misguided and confusing" is severely exaggerated and it doesn't really need a tag.
The URL you provided doesn't explicitly differentiate between "hash tables" and "hash maps" either; it seems that they only store the key in order to keep the diagrams simple.
But naturally the definition can be changed if you can find significant sources making this distinction. Sorry, but I can't be bothered searching for them at the moment. -- intgr [talk] 18:13, 6 December 2007 (UTC)Reply
A hash table can be used to implement either a map concept, or a set concept. MegaHasher (talk) 21:23, 7 December 2007 (UTC)Reply
Hash tables can be used to implement both sets and maps, and the word "hash table" does not imply any particular interface. "Hash map" may connote that it implements a map rather than a set, which is a qualitative difference, but this is a minor point that does not need expounding upon in the introduction. Dcoetzee 21:35, 7 December 2007 (UTC)Reply

Where did all the code go?


I went reading through the article, as I do every few years since it represents my first ever contribution to Wikipedia, and noticed that the short pseudo-code for the deletion algorithm for a linearly probed hash table had been removed. Standing in its place is a reference to the ICI implementation where the algorithm can be gleaned, but not so simply because it's handling a more complex case (set subtraction).

I just wondered why the clearer pseudo-code section was removed?

I confess to a feeling of ownership for the deletion algorithm, since that part of the entry came about after Tim Long and I discovered wikipedia. Some years earlier we'd been using a lot of hash tables, and it had bugged me that I could find no reference to an efficient deletion method. I worked out how to do it (I thought), only to discover that my method didn't always work. Tim debugged it (by adding a missing condition to the boolean expression.) We thought Wikipedia was the perfect place to put the small invention.

The Wikipedia isn't a place to put inventions, unless they are referenced by reliable sources.- (User) WolfKeeper (Talk) 17:07, 28 January 2008 (UTC)Reply

In fact, the pseudo-code for all the basic operations has been removed. I don't understand the logic behind that, hence this note. Lukekendall (talk) 15:08, 28 January 2008 (UTC)Reply

There's lots of different deletion algorithms that can be used, depending on what kind of hash table it is. It doesn't seem appropriate to include pseudocode for just one kind; instead the wikipedia's job is to talk about general principles, and refer to reliable sources such as Knuth that contain the pseudocode.- (User) WolfKeeper (Talk) 17:07, 28 January 2008 (UTC)Reply
That's not a practical approach for accepting software knowledge, because there are far more problems - and algorithms to solve them - than there are reputable published solutions. If there were, programming would be a mechanical process of referring to these gems and gluing them together! It isn't. (Unless you're working in the few-but-increasing extremely well-trodden problem areas.) Nor do you even need a "reliable source such as Knuth" for code: if you publish the code, any programmer can try it out and verify that it works. The code itself is the most effective proof you can have: it's easier to understand than a formal proof of correctness, or acceptance-through-trust-my-reputation. It shouldn't matter whether it's an invention or not, merely that it's verifiably true.
As for the issue of it only applying to one kind of hash table, let's consider that statement a little more deeply:
For any sort of chained hash table (which is not a pure hash table, and whose behaviour swings more to that of the underlying chaining data structure as the "load" increases), the deletion is effectively solved by using the deletion method for that chaining structure.
For linearly probed hash tables, it is a solution to a problem which as far as I know is not published elsewhere.
For quadratically probed hash tables there is no similar solution, as far as I know.
So objecting to it on those grounds seems the same as objecting to the inclusion of the solutions to the quadratic and cubic polynomials because the formulae are special cases that don't solve all polynomial equations.
Who removed the sections? I looked through the history and discussions but couldn't find it, and there'd be a lot of binary chopping to do to find it. Do you happen to know, off-hand?

122.106.85.3 (talk) 00:21, 29 January 2008 (UTC)Reply
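For readers following this thread, here is a minimal C sketch of the backward-shift deletion for a linearly probed, single-slot-stepping table — the case being discussed above. All names (slot, table, hash_index, TABLE_SIZE) are hypothetical, and this is an illustration of the general technique, not the pseudo-code that was removed from the article.

#include <stdbool.h>
#include <stddef.h>

#define TABLE_SIZE 8                       /* hypothetical small table for illustration */

struct slot { bool occupied; unsigned long key; };
static struct slot table[TABLE_SIZE];

static size_t hash_index(unsigned long key) {      /* toy hash, already reduced to a slot index */
    return (size_t)(key * 2654435761UL) % TABLE_SIZE;
}

/* Remove the entry in slot i, then shift later entries of the same cluster
   back so that lookups keep working without tombstone markers. */
static void delete_at(size_t i) {
    size_t j = i;
    for (;;) {
        table[i].occupied = false;
        size_t k;
        do {
            j = (j + 1) % TABLE_SIZE;
            if (!table[j].occupied)
                return;                    /* reached the end of the cluster: done */
            k = hash_index(table[j].key);
            /* skip entries whose home slot k lies cyclically in (i, j]; moving
               such an entry into slot i would make it unreachable from the
               slot its own hash points at */
        } while ((i <= j) ? (i < k && k <= j) : (i < k || k <= j));
        table[i] = table[j];               /* pull the entry back to fill the hole */
        i = j;                             /* and repeat with the hole it left behind */
    }
}

As noted above, this only works because linear probing with single-slot stepping lets the successors of a deleted slot be found by scanning forward; quadratic probing offers no such scan.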

Writing your own pseudocode and adding it to the wikipedia contravenes WP:Original research.- (User) WolfKeeper (Talk) 05:35, 29 January 2008 (UTC)Reply
I disagree - I think pseudocode is justified where it describes knowledge or methods that can be attributed to reliable sources. It's just a method of presentation. As long as it's not made too specific, there isn't an issue, and this is a widespread practice. Dcoetzee 22:42, 4 March 2008 (UTC)Reply
Dcoetzee is right: "No original research" means no original ideas. Original expression is different and required by copyright law. --Damian Yerrick (talk | stalk) 18:04, 29 December 2008 (UTC)Reply

For beginners

This article is terrible - someone needs to make this article clearer for beginners or even intermediates. It barely explains anything at first. How about a simple example instead of jumping into advanced theory??? 129.128.241.131 (talk) 17:53, 7 February 2008 (UTC)Reply

I feel the same way. Someone (like me) who doesn't know what a hash table is for will not find out quickly. I had to read most of the article before I could figure it out. I would suggest something like this be added, either in the introduction and/or in the very beginning of the contents. (I'm no expert; someone who knows what they're talking about should write it.)

Say an array of available space for storing data has indices like 1, 2, 3, ..., 1000: ARRAY[1], ARRAY[2], etc. However, the keys to the data may be something entirely different, like "John Smith", "Lisa Smith", and "Sam Doe". If they were just put into the array directly, some sort of search method would be necessary to find the desired entry. A hash table is a way of solving this problem, allowing extremely rapid lookup. In the example above, a pseudo-random function is called to assign each of John Smith, Lisa Smith, and Sam Doe to some number between 1 and 1000. If "John Smith" is assigned 873 by the hash function, its data is stored in ARRAY[873]. It can be retrieved immediately on demand by just recomputing the hash function on "John Smith" again to locate the correct index. If the hashing function is well-designed, different keys being sent to the same number is unlikely until the array begins to fill up. At some point, of course, keys will begin to "collide", to be assigned the same array index. The hashing algorithm needs a way to find the right index anyhow.

--MikeR7 (talk) 21:07, 4 March 2008 (UTC)Reply
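If it helps, the suggested introduction above could be illustrated by a minimal C sketch (the names and phone-number values are made up, and collisions are deliberately ignored) showing the store-then-retrieve-by-recomputed-hash idea:

#include <stdio.h>

#define TABLE_SIZE 1000

/* toy hash: mixes the characters of the key and reduces it to an index 0..999 */
static unsigned hash_index(const char *key) {
    unsigned h = 0;
    while (*key)
        h = h * 31u + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

int main(void) {
    const char *phone[TABLE_SIZE] = {0};            /* the ARRAY[...] of the example */

    phone[hash_index("John Smith")] = "521-1234";   /* stored at whatever index the hash gives */
    phone[hash_index("Lisa Smith")] = "521-8976";

    /* retrieval just recomputes the hash on the key to locate the correct index */
    printf("John Smith -> %s\n", phone[hash_index("John Smith")]);
    return 0;
}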

I agree. A gentle introduction describing a specific example could be very useful. I'll write something up. Dcoetzee 22:41, 4 March 2008 (UTC)Reply

Wow! MikeR7's introduction is great for me. And it would be so for many beginners. Thank you! -- Kang —Preceding unsigned comment added by 59.17.69.101 (talk) 04:50, 28 August 2009 (UTC) It would be good to add to this an example of how the hash function might be constructed. —Preceding unsigned comment added by Lunedi9 (talkcontribs) 20:28, 6 November 2010 (UTC)Reply

Additional references?

I can't access it at the moment, but this paper is probably also relevant to the robin hood hashing: Munro, J. I., and Celis, P. 1986. Techniques for collision resolution in hash tables with open addressing. In Proceedings of 1986 ACM Fall Joint Computer Conference (Dallas, Texas, United States). IEEE Computer Society Press, Los Alamitos, CA, 601-610. —Preceding unsigned comment added by 75.71.67.71 (talk) 17:02, 15 April 2008 (UTC)Reply

Open Hashing: inaccurate description

The description of "open hashing" seems quite inaccurate.

That "data indexed by the hash is 'stored externally' to the hash table" may be *required* for this data structure, but does not define it. To store variable length data open addressing may be used in conjunction with storing references to the actual data in the hash table. Open addressing is used synonymously to closed hashing, which results in a contradiction.

The example which follows the first sentence seems at least incomprehensible to me (By the way: Examples should be separated from a general description.) Key-value pairs are added to the group/bucket according to the hash value of the key. (The hash of the key defines the group.)

"The key that identifies a bucket is now put into the hash table (in this case it is as simple as the bucket number)" seems wrong to me. Actually, a reference to the group is 'put into' the hash table. And this is not equal to the key (Hash keys are generally not stored in hash tables, the values are stored).

Separate chaining should be characterized as a form of open hashing.

Sorry for the extensive criticism. This part of the article really confused me...

--Whzlt (talk) 03:24, 18 May 2008 (UTC)Reply

I've removed it. It was just simply terrible, and seemed to be a duplication of the chaining section anyway, and was unreferenced.- (User) WolfKeeper (Talk) 04:11, 18 May 2008 (UTC)Reply

Look up is not O(1) at all

The mathematical definition of O(1) is when n tends to infinity. All the demonstrations of O(1) I've seen assume at some point that the size of your data is bounded, or that you don't use your table at more than 80%, etc. But if your data gets really big (say greater than 10^10^10), either your collision rate will increase to 100% (then you tend to O(log n) ), or you'll have to increase your table size. The table size is about proportional to the data size. Then you'll have to compute a greater hashkey to uniquely identify the buckets. As it happens, the size of the hashkey (and the time it takes to compute it) grows with log n, where n is the number of buckets. So the operations on a hashtable are really O(log n), even if it stays reasonable for large data.--Yitscar (talk) 08:11, 24 May 2008 (UTC)Reply
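Stated symbolically, the argument above is (a sketch, assuming the load factor n/m is kept below some constant b as the table grows):

    m \;\ge\; n/b
    \quad\Longrightarrow\quad
    \text{bits per slot index} \;=\; \lceil \log_2 m \rceil \;\ge\; \log_2 n - \log_2 b \;=\; \Theta(\log n),

so the width of the hash value that must be computed grows with \log n rather than staying constant.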

Hash tables can be designed for any range of n. For that range of n, lookup is O(1). End of.- (User) WolfKeeper (Talk) 08:35, 24 May 2008 (UTC)Reply
I'm talking mathematically here. O() is a concept valid at infinity, not in a range. See Big_O_notation#Formal_definition
I understand that in that range computation time is independent of the size of data, whereas it wouldn't be for, say, quicksort. But I'm saying the Big O notation is not rigorous here.--Yitscar (talk) 20:31, 24 May 2008 (UTC)Reply
I don't agree. In any case, to change the article, you would need a reference.- (User) WolfKeeper (Talk) 22:46, 24 May 2008 (UTC)Reply
His point is valid - that in order for the average number of elements in each bucket to be constant, the table size must be a constant multiple of the number of elements, and consequently the hash function's output must have O(log n) bits. Nevertheless the O(1) lookup claims are so pervasive in standard reference works that we'd need to find a very authoritative source to contradict them for the purposes of this article. In practice what's more important of course is that it's much faster to generate O(log n) hash bits than it is to incur O(log n) cache misses, as in a binary search tree. Dcoetzee 01:29, 25 May 2008 (UTC)Reply
(Deleted my own comment -- yes, you'll always need log(n) bits to index into the hash table.) not-just-yeti (talk) 14:52, 17 July 2008 (UTC)Reply
Interesting comment, but unfortunately it would be OR without references. 124.101.249.63 (talk) 14:00, 17 July 2008 (UTC)Reply
Indeed, and despite looking over the whole internet I haven't found anyone who's 'got it right', so I'm not modifying the page for now--Yitscar (talk) 17:38, 20 July 2008 (UTC)Reply
Don't take the notion of "time"-complexity too literally. For most data structures, "time" refers to the number of key comparisons performed (see, for example, binary search trees), not to the number of bits which are compared. Likewise, time-complexity of hash tables refers to the number of hash-computations and memory lookups (each of which is considered to take a constant number of operations), so it is in O(1). Adrianwn (talk) 08:40, 16 December 2008 (UTC)Reply

Wrong, all this talking about log(n) is fully wrong. Normal, linear search is O(n/2) with worst case O(n). Hash tables - as they are mostly implemented - have O(n²/m) where m is the size of the hash table. That's why crowded hash tables become as slow as normal search. But nowhere O(log n), forget about that! 178.197.232.59 (talk) 11:16, 22 October 2012 (UTC)Reply

load factor

Sorry to be thick, but can we define it please? Thanks. 124.101.249.63 (talk) 14:01, 17 July 2008 (UTC)Reply

The Load Factor page defines it as:
Load factor (computer science), the ratio of the number of records to the number of addresses or indexes within a data structure
--Yitscar (talk) 17:38, 20 July 2008 (UTC)Reply
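As a worked example of that definition (numbers made up for illustration), a table with 1000 slots holding 750 records has

    \text{load factor} \;=\; \frac{\text{records}}{\text{slots}} \;=\; \frac{750}{1000} \;=\; 0.75 .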

Info about multiplicative hashing is wrong

The page points to some page that claims multiplicative hashing has poor clustering. But that page doesn't get Knuth's multiplicative hashing right -- it falls into the trap that many people do of thinking that it works as (k*a) mod m for some a. In fact, multiplicative hashing does fine if you implement it correctly, by taking only the high bits of k*a. —Preceding unsigned comment added by AndrewCMyers (talkcontribs) 16:45, 13 January 2009 (UTC)Reply

Sum over sequence

For the x-th time, 1 + 2 + 4 + 8 + ... + n = 2n − 1, not 2^n − 1; you're thinking of 1 + 2 + 3 + 4 + ... + n. Just put in values for n and you will see this, or do you need a formal proof? Please people, actually read the text you edit. Adrianwn (talk) 17:22, 26 March 2009 (UTC)Reply
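For anyone double-checking the identity in question: with table sizes doubling up to n = 2^k, the two sums are

    1 + 2 + 4 + 8 + \dots + n \;=\; \sum_{i=0}^{k} 2^i \;=\; 2^{k+1} - 1 \;=\; 2n - 1,
    \qquad\text{whereas}\qquad
    1 + 2 + 3 + 4 + \dots + n \;=\; \frac{n(n+1)}{2}.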

You should add a comment explaining why it is that, I misread the termination condition myself.- (User) Wolfkeeper (Talk) 17:33, 26 March 2009 (UTC)Reply
Yes, that is probably a good idea; I will think of something. Adrianwn (talk) 05:34, 27 March 2009 (UTC)Reply
I rewrote it and tried to clearly distinguish between the ops for the resizing and the ops for adding elements to the table. Please check the formulas for mistakes. Adrianwn (talk) 08:17, 31 March 2009 (UTC)Reply

Pseudo-randomness is not necessary

This sentence has been removed:

In most cases the hash function is deliberately chosen to have pseudo-random properties, so that small changes of a key give a large and apparently random (although of course reproducible) effect on the hash returned. Because of this random effect, in some cases,

Hash functions need not be pseudo-random, they need only spread the keys as evenly as possible. Moreover collisions are not due to pseudo-randomness of the function. --Jorge Stolfi (talk) 05:59, 3 April 2009 (UTC)Reply

Congratulations! You seem to have successfully removed every part of the description of how hash tables in general work from the article!!! Well done indeed!- (User) Wolfkeeper (Talk) 23:03, 3 April 2009 (UTC)Reply
Perhaps you can remove the collision resolution algorithms as well?- (User) Wolfkeeper (Talk) 23:03, 3 April 2009 (UTC)Reply
Is this comment still relevant, or did you just look at the article between edits? I have tried to keep all the pertinent information, deleting only some details that can be found in the articles specific to each method. Is there anything in particular that you think should remain here? --Jorge Stolfi (talk) 00:46, 12 April 2009 (UTC)Reply
Yes, I can only continue to congratulate you on removing how they work, and adding more on what they are used for.- (User) Wolfkeeper (Talk) 14:23, 12 April 2009 (UTC)Reply
Um, forgive me for being dense... Are you being ironic, or do you mean it? If the former, please be more specific. There are gazillions of hash table algorithms, methinks that it is best to give a general survey and leave the details to specific articles. Do you think that the general description of how they work is not adequate? All the best, --Jorge Stolfi (talk) 16:53, 12 April 2009 (UTC)Reply
Yes, I was being ironic. People don't come here simply to read in-depth comparisons about something and something else. They come here primarily to find out what something is. What it can be used for, and how it compares and the history is also important, but are not usually the primary reason.- (User) Wolfkeeper (Talk) 16:58, 17 April 2009 (UTC)Reply
Well, first, I don't think that is quite true. But it does not matter. What matters is that the lead section and the accompanying figure already give enough information about what a hash table is and how it works --- enough to satisfy the curiosity of most people who don't know that already. Anything beyond that will have to go into gory technical details, and will be of interest only to people who implement hash tables --- which is a very small set indeed, much smaller than those who need to choose between a hash table or a balanced tree. I also cannot believe that the average reader wants to know the details of chained versus open addressing before knowing what hash tables are good for. All the best, --Jorge Stolfi (talk) 22:40, 17 April 2009 (UTC)Reply
With very few very clearcut exceptions (like birthdates), the lead isn't supposed to contain anything not in the body of the article.- (User) Wolfkeeper (Talk) 23:36, 17 April 2009 (UTC)Reply
Sorry again, I don't understand your complaint. When I last looked, the lead section had a sufficiently clear and complete explanation of how a hash table works, and that explanation was throughly expanded in the body of the article. So what, exactly, was wrong with the latter? All the best, --Jorge Stolfi (talk) 02:09, 18 April 2009 (UTC)Reply

Why are prime-sized tables bad?

The article claimed that

Some older hashes are even worse, requiring table sizes to be a prime number rather than a power of two, again computing the bucket index as the hash value modulo the table size. In general, such a requirement is a sign of a fundamentally weak function; using a prime table size is a poor substitute for using a stronger function.

This statement sounds rather biased. What matters is the actual hash function, that maps the key to a bucket index. A "raw" hash function that gives good results when taken modulo a prime is OK as long as the table size s is a prime. One can do dynamic resizing with tables of prime size, with little extra cost. The above statement can be turned around to say "using tables whose size is a prime number increases the chance that the hashes will be uniformly distributed, since some popular hash functions are known to perform badly when the table size is a power of two and the hash index is obtained by bit masking." So, how should we rephrase this sentence? --Jorge Stolfi (talk) 00:40, 12 April 2009 (UTC)Reply
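For readers unfamiliar with the two conventions being contrasted above, a minimal C sketch (hypothetical names) of how a raw hash value is reduced to a bucket index in each case:

#include <stddef.h>

/* table size is a prime: reduce the raw hash with an integer modulo,
   which lets all of its bits influence the bucket index */
static size_t index_mod_prime(unsigned long raw_hash, size_t prime_size) {
    return raw_hash % prime_size;        /* e.g. prime_size = 769, 1543, 3079, ... */
}

/* table size is a power of two: reduce by bit masking, which is cheaper
   but keeps only the low-order bits of the raw hash */
static size_t index_mask_pow2(unsigned long raw_hash, size_t pow2_size) {
    return raw_hash & (pow2_size - 1);   /* e.g. pow2_size = 1024 */
}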

The "mod N" trick is inside the hash function, not outside it

There is a misunderstanding in the recent edits. The "modulo the table size" step is technically a part of the hash function, not of the hash table algorithm. If the bucket array has size s, the hash function must return a number in 0..s-1. See Talk:Hash function. Briefly, the mod-s trick is not always a good idea, and cannot be discussed separately from the "raw" hash function. More importantly, all discussion about the hash function (desired properties, meaning of "perfect", etc.) assumes that there is no external mod-s step. All the best, --Jorge Stolfi (talk) 02:21, 18 April 2009 (UTC)Reply


At work I regularly explain to new developers the implementation of hash tables in our software product. I describe the computation prior to "mod N" as creating a fixed size digest of (some of) the entropy in the (typically larger, sometimes variable length) input value. Often we use a number of digest functions all returning the same type (typically an unsigned 32 bit integer). My developers readily grasp that the range of such digests is too great and must be transformed into an index into the N entry hash array, hence the "mod N" step. Making the size of the array prime simply ensures that all entropy in the digest participates in selection of the hash array position.

The independence of the digest step and the "mod N" step is underscored by the fact that N may change over time as values get added to the hash table, while the chosen digest function does not change. From experience I know that whenever we fail to distinguish clearly these two notions confusion ensues. 67.189.170.148 (talk) 21:20, 8 August 2012 (UTC) "John Yates" <[email protected]>Reply
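A minimal C sketch of the two-step view described above (FNV-1a is used here purely as an example digest; any fixed-width digest function would do):

#include <stdint.h>
#include <stddef.h>

/* step 1: a fixed-width digest of (some of) the key's entropy */
static uint32_t digest32(const unsigned char *key, size_t len) {
    uint32_t h = 2166136261u;            /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= key[i];
        h *= 16777619u;                  /* FNV-1a prime */
    }
    return h;
}

/* step 2: reduce the digest to an index into the N-entry hash array;
   N may change as the table grows, while the digest function does not */
static size_t slot_for(const unsigned char *key, size_t len, size_t N) {
    return digest32(key, len) % N;
}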

Is the "Basic algorithm" section needed?

The "Basic algorithm" section does not seem to add to what has aleady been said in the lead section. Considering that the "mod N" trick does not belong here, the section seems redundant. --Jorge Stolfi (talk) 03:01, 18 April 2009 (UTC)Reply

It's really bad style to only mention something in the lead; and in this case it's the fundamental algorithm the whole thing rests upon.- (User) Wolfkeeper (Talk) 16:14, 18 April 2009 (UTC)Reply

Advantages of "array hashing"

Array hashing (separately chained hashing using dynamic arrays to store the buckets) is advantageous only in some limited conditions. If the load factor is low (< 1) and the entries are well-distributed, most buckets have at most one entry, so the cache provides little benefit. Also the dynamic arrays usually require a "bucket length" field; if there is only one entry, then this extra field takes away the memory saved by omitting the chain links. On the other hand, if the load factor is high (>>1), and the bucket arrays are resized by doubling/halving, then the vacant slots will waste more memory than the chain links would. All the best, --Jorge Stolfi (talk) 23:47, 24 April 2009 (UTC)Reply


Dynamic arrays only require a "bucket length" field when they store 32-bit or 64-bit integer keys. It is not required for variable-length null-terminated strings (since strings are length-encoded).

The advantage of the array hash table is that it can scale much more efficiently than a chained hash table, with respect to both time and space. If a very low load factor is used (say 16 million slots and only 1 million keys), and considering that 32-bit integer keys are used, then the array hash table will consume no more space than the chained hash table, while still rivaling its speed. Once more keys are inserted, the array hash table will start to save space over the chained hash table, while retaining high access speed. The chained hash table, on the other hand, will start to consume more memory and slow down --- eventually, performance will get so bad that you will need to resize the chained hash table, further consuming more space and computing resources. This is not the case with the array hash table, which has been shown to scale well as the load factor increases, making it a more practical choice in a dynamic environment.

When variable-length string keys are used (as was the original design of the array hash table), even under low load, the array hash table will consume less space than the chained hash table, because it is cheaper to store a string in a dynamic array than it is in a node of a linked list (as a result of memory allocation overheads and the requirement of a next-link pointer).

Also, the dynamic arrays are not resized by doubling or halving. Dynamic arrays are grown in an exact-fit manner --- so they are only grown by as many bytes as needed, which is both fast and memory efficient.

cheers. —Preceding unsigned comment added by Hetori (talkcontribs) 23:26, 11 June 2010 (UTC)Reply

Dear readers,
Do any of you have any references to things which "are grown in an exact-fit manner", which Hetori calls "dynamic arrays"?
Those sound like they would be useful for a lot of memory-constrained things.
The static array article seems to imply that any kind of array that can grow is a "dynamic array".
Alas, the dynamic array article currently seems to imply that "dynamic arrays" always resize by doubling or halving or some other multiplicative amount; never "grown in an exact-fit manner". --DavidCary (talk) 05:55, 18 February 2022 (UTC)Reply
@DavidCary: Interesting question. AFAIK, you can achieve that using realloc() POSIX compliant C API. The implementation would involve you calling realloc() every time you add or remove an element into an array. However, the performance of that operation would be dreadful, since there are multiple memory reallocations involved. I couldn't think of any other implementation that doesn't involve realloc(). WikiLinuz🍁(talk) 06:23, 18 February 2022 (UTC)Reply
@DavidCary: So, if I understand your question correctly, a simple implementation to insert an element into an "exact-fit array" would look something like this:
#include <stdlib.h>
typedef struct {
    void **array_ptr; // underlying array
    size_t array_current_size; // size of allocation of `array_ptr` pointer
} array_structure;

int insert_new_element(array_structure *array, void *new_element) {
    array->array_ptr = realloc(array->array_ptr, ++array->array_current_size * sizeof(void*));
    if (array->array_ptr != NULL) {
        array->array_ptr[array->array_current_size - 1] = new_element;
        return 0;
    }
    return -1;
}
WikiLinuz🍁(talk) 06:33, 18 February 2022 (UTC)Reply
Thank you. This is the thing I was looking for. What is the name of this thing? I added a brief mention of this kind of exact-fit array -- including the "realloc" that User:WikiLinuz mentioned -- at Dynamic_array#Variants. (I also added a reference to an article that, like User:WikiLinuz, *shows* some code that implements the same kind of thing -- but, alas, I don't see it *name* this kind of thing). But I feel like this is common enough (even if common only on teacher's blackboards as a step towards geometric-growth dynamic arrays, even if rare in actual production code) that this should be added to the {{List_data_structure_comparison}} template table -- as soon as we find a name for it. --DavidCary (talk) 07:00, 23 February 2022 (UTC)Reply
@DavidCary: Glad you found it useful. But this is mostly called "exact-fit vector/array" - there isn't any "formal convention", at least I wasn't aware of one. This type of exact-fit container is very specialized and only used in applications like embedded systems where memory is the superior priority compared to performance, as this type of realloc() would result in worse runtime performance, especially if add/remove are frequent operations on the container. WikiLinuz🍁(talk) 07:26, 23 February 2022 (UTC)Reply

O(1) in worst-case or amortized sense?

Hi, a recent edit claims that hash tables (HT) allow arbitrary insertions and deletions in O(1) worst-case sense. I am thinking of usual implementations where the table is doubled/halved whenever the load factor gets above/beyond fixed thresholds. For these implementations the insertion cost is O(n) worst-case but O(1) only in amortized sense.
Perhaps the editor is thinking of implementations that keep 2 (or 3) arrays and spread out the copying load over many ops? Those would indeed have O(1) worst-case insertion cost in theory, but the overhead seems quite large, and I wonder whether such solutions are really practical. Are they? If not, methinks that the claim of O(1) worst-case in that section would be misleading, since the edited sentence is about practical advantages of HTs. All the best, --Jorge Stolfi (talk) 03:30, 27 May 2009 (UTC)Reply

No, that was not claimed at all. The issue was that "amortized" should not be used together with "average" and "O(1)", because amortized analysis is based on worst-case, not average case. Insertion in a hash table, even if the table size never changed (we are not talking about that issue), is definitely not O(1) worst-case, amortized or not. Looking at the article, someone might conclude that it would be okay to say that insertion in a hash table was "amortized O(1)", which is definitely false. --76.173.203.58 (talk) 10:16, 27 May 2009 (UTC)Reply
Oops, yes, I forgot for a moment that the O(1) time for a "normal" (non-resizing) insertion is only average/probabilistic, not worst-case.
However, the word "amortized" (not "worst-case amortized") is still quite appropriate here, in theory and practice. Let's define an (n,m,a,b)-table as a hash table with dynamic resizing by a factor of 2, contaning n items, with m array slots, such that 0 < an/mb < 1. Let's say that a constant, in this context, is a number that does not depend on m or n (but may depend on a and b) Let's also define a random hash function as a function chosen from some large set, in such a way that it may map each item to any existing slot with uniform and independent probabilities. Then:
(1) If (n+1)/m ≤ b and the hash function is random, the expected cost of an insertion in an (n,m,a,b)-table is bounded by a constant A.
(2) If (n+1)/m > b and the hash function is random, the expected cost of an insertion in an (n,m,a,b)-table is at most B + C n, where B, C are constants.
From (1) and (2) alone, one would conclude only that
(3) For any sequence of k distinct items, if the hash function is random and independent of the sequence, then the expected total cost for inserting those items into a (0,1,a,b)-table will be at most B k + C k²/2.
But this is obviously a very pessimistic estimate. By considering how the table state evolves during those operations, we can get
(4) For any sequence of k distinct items, if the hash function is random and independent of the sequence, then the expected total cost of inserting those items in a (0,1,a,b)-table will be at most D k for some constant D.
or, more generally
(5) For any sequence of k operations (insertions, deletions, lookups), if the hash function is random and independent of the sequence, and a << b/2, then the expected total cost of performing those operations in an (n,m,a,b)-table will be at most D (k + n) for some constant D.
Thus, if one allows arbitrary insertions and deletions, with dynamic resizing, one cannot claim that the expected cost for *one* *specific* insertion is O(1), even averaging over all hash functions. If the load factor is close to the threshold, the expected cost will be proportional to the table size.
The cost is O(1) only if averaged over a long sequence of operations. To be precise, the expected cost (over all hash functions) is a term that depends on the initial table state, plus an expected cost per operation that is O(1) only when averaged over all operations in the sequence. To prove (4) and (5) one must use the same reasoning used in amortized worst-case analysis. Indeed, "amortized sense" does not imply "worst-case", it means simply that the analysis is applied to operation sequences rather than single operations.
And this was the intended meaning of the original claim that properly dimensioned hash tables provide "constant cost per operation, in the average (indeed, amortized) sense". Methinks that this statement is as correct and informative as one can hope to be in a lead section. All the best, --Jorge Stolfi (talk) 22:14, 27 May 2009 (UTC)Reply
tl;dr. You can't polish a turd, amortised is completely the wrong word.- (User) Wolfkeeper (Talk) 22:30, 27 May 2009 (UTC)Reply
Incidentally, for a properly working hash table with a thousand entries in it and 75% full, O(n) is not the worst case; it's the never-going-to-happen case. The chances of hitting that case are so low my calculator cannot calculate it; it would not happen once in the life of the universe. The worst conceivable case is still O(1).- (User) Wolfkeeper (Talk) 22:36, 27 May 2009 (UTC)Reply
Your aggressive and dismissive stance doesn't help your case. Worst-case analysis is based on the worst case, however improbable it may be. It's not meant as a personal insult to hash tables, or even a practical consideration. There's no such thing as the "worst conceivable case" - you can talk about the 99th percentile case if you want, but it's not going to differ asymptotically from the average case. We also have to be very clear here about what operations are being counted - even in the average case, the O(1) is counting hash computations and table lookups each as constant-time operations. Dcoetzee 23:13, 27 May 2009 (UTC)Reply
Hash tables are statistical algorithms that have an expected run time of O(1) as n->infinity. The worst case never happens in practice unless you have a buggy implementation. It doesn't matter what the 'worst case' is; and amortized analysis assumes worst case.- (User) Wolfkeeper (Talk) 00:28, 28 May 2009 (UTC)Reply
Regarding the argument at hand: I believe the amortized cost of insertion differs depending on exactly what you're counting and whether or not the table is dynamically expanding. We really need to find an appropriate reference that goes into detail about this. Dcoetzee 23:41, 27 May 2009 (UTC)Reply
People want to know how hash tables behave in the real world. The amortized cost isn't it.- (User) Wolfkeeper (Talk) 00:28, 28 May 2009 (UTC)Reply
Sigh. Dynamically expanding tables are also O(1).- (User) Wolfkeeper (Talk) 00:30, 28 May 2009 (UTC)Reply

(unindent) Please stop the condescension, it's rude and annoying. Currently, this article contains no analysis whatsoever. The article should describe how hash tables perform under various models of analysis, including both the worst-case model where the hash function is unspecified and the more realistic model where it is a random variable over a sensible (say, uniform) distribution. Because the actual hash function is a parameter specified by programmer, these are useful in establishing bounds on performance and emphasizing the impact of hash function choice on performance. And as you know, the linear worst case performance emerges not from geometric table expansion, but from collisions. I'll add something, and I really hope I won't be facing resistance about it. Dcoetzee 02:19, 28 May 2009 (UTC)Reply

I don't plan on resisting at all; unless it's the bunch of unreferenced OR it sounds like, in which case I won't resist, I'll just revert it wholesale.- (User) Wolfkeeper (Talk) 03:25, 28 May 2009 (UTC)Reply
Okay. I went ahead and added some stuff with a source and you can edit it if you want. I understand your point of view about wanting to convey the practical concerns of hash tables that programmers need to worry about, but I'm also interested in speaking to other audiences such as researchers who wish to model hash tables formally. I hope you understand why I was frustrated with the way you treat people - you may disagree with other users on matters of presentation, or they may misunderstand characteristics of hash tables, but I still think it's worthwhile to take the time to listen to and address their concerns. Dcoetzee 04:15, 28 May 2009 (UTC)Reply
I can't understand Wolfkeeper's claim that "O(n) is never-going-to-happen case". With dynamic resizing (which is what is under discussion, and how most "general purpose" hash tables are implemented), resizing will happen as soon as one inserts more than the threshold number of items; and the cost of resizing is O(n), because all items have to be copied to the new table, and (in typical designs) rehashed too. If n is 750,000, m is 1,000,000, and the threshold is 75%, the expected cost of the next insertion is 750,000, not 1. If you start with 749,000 items, and do 2000 insertions, for the first few ones the expected cost is 1, but you can bet that one of them will cost 750,000 units. Still the expected cost of the whole sequence does not exceed 749,000 + 2,000 = 751,000 units --- a lot more than 2000×1 = 2000, but a lot less than 2000×750,000 = 1,500,000,000. And that, in very practical terms, is what the word "amortized" means. --Jorge Stolfi (talk) 06:55, 28 May 2009 (UTC)Reply
Because in the general case, you use incremental resizing. That does not have 750,000 resizings in a single step, it has for example, 1, and the next 1,000,000 steps will have one also.- (User) Wolfkeeper (Talk) 22:43, 26 July 2009 (UTC)Reply
That is not how dynamic resizing works. Resizing the table, by any amount, has a cost proportional to n, because the whole table must be copied (and rehashed) to a new location. That is how memory allocation works in any platform (including C's realloc, if that is what you have in mind). So if one resizes the table by 1 after each insertion, the cost will be n per insertion, or n^2 for inserting n items. In dynamic resizing, the table size is doubled whenever the load factor reaches a certain threshold. So the 750,000th insertion costs 750,000, but the next 750,000 insertions will cost 1 each. That is why one needs amortized analysis. All the best, --Jorge Stolfi (talk) 12:03, 27 July 2009 (UTC)Reply
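The quantitative version of that last point, for a table that starts at size 1 and doubles whenever it fills (a sketch of the standard textbook bound):

    \text{cost of } n \text{ consecutive insertions} \;\le\;
    \underbrace{n}_{\text{ordinary inserts}} \;+\;
    \underbrace{1 + 2 + 4 + \dots + n}_{\text{copying during resizes}}
    \;\le\; 3n,

so the amortized cost per insertion is O(1), even though the single insertion that triggers a resize costs \Theta(n).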
No, that's not what I mean, read: Hash_table#Incremental_resizing, when you trigger a resizing, you allocate the array, but don't transfer the hashed data across immediately, instead you do it one at a time with each access. When the old table is empty then you deallocate it. This avoids the 750,000 operation insertion and gives only small statistical variations for any access.- (User) Wolfkeeper (Talk) 12:25, 27 July 2009 (UTC)Reply
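For concreteness, a minimal C sketch of the incremental-resizing idea Wolfkeeper describes (separate chaining, hypothetical names; lookups during a resize would have to consult both arrays, which is omitted here):

#include <stddef.h>
#include <stdlib.h>

struct node { unsigned long key; struct node *next; };

struct incr_table {
    struct node **old_buckets;   /* table being drained; NULL when no resize is in progress */
    struct node **new_buckets;   /* table being filled */
    size_t old_size, new_size;
    size_t migrate_pos;          /* next old bucket to migrate */
};

static size_t bucket_of(unsigned long key, size_t size) { return key % size; }

/* Move the contents of one old bucket into the new table.  Calling this once
   per insert/lookup spreads the O(n) copying cost over many operations, so no
   single operation has to pay for the whole resize. */
static void migrate_one_bucket(struct incr_table *t) {
    if (t->old_buckets == NULL)
        return;                                      /* no resize in progress */
    struct node *n = t->old_buckets[t->migrate_pos];
    while (n != NULL) {
        struct node *next = n->next;
        size_t b = bucket_of(n->key, t->new_size);
        n->next = t->new_buckets[b];                 /* relink into the new table */
        t->new_buckets[b] = n;
        n = next;
    }
    t->old_buckets[t->migrate_pos] = NULL;
    if (++t->migrate_pos == t->old_size) {           /* old table fully drained */
        free(t->old_buckets);
        t->old_buckets = NULL;
    }
}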
Should this article say the "stop everything and rehash the entire table all at once" scheme (where occasionally an insert requires O(n) time) is a naive scheme never recommended for use in practice -- a scheme only used by teachers as an over-simplified example of some important parts of hash table algorithms, before getting into the complicated details of practical hash table algorithms, analogous to the way electronic codebook and overlapping subproblems start with simple examples that are never recommended for use in practice?
Is the term dynamically expanding tables mentioned by Wolfkeeper a good name for the entire category of hash table algorithms that expand the memory used at runtime as necessary, including many that are O(1) for each and every insert (not merely amortized)? --DavidCary (talk) 17:20, 10 May 2017 (UTC)Reply

"Good" hash functions

From the lead paragraph of "Choosing a good hash function":

A good hash function is essential for good hash table performance... However, since poor hashing usually degrades hash table performance by a constant factor, and hashing is often a small part of the overall computation, such problems commonly go undetected.

If the latter is true, how can the former be true? If the problems caused by poor hashing commonly go undetected, and the hashing is a small part of the overall performance, it seems that a good hash function can't be called "essential for good performance" but is rather an aesthetic preference at most. One might distinguish between 'adequate' and 'bad' functions, where 'bad' might run slowly and produce uneven distributions, but if your problems are going undetected, I'd say you've at least struck 'adequate', and improving to 'good' would be an exercise in lily-gilding.
But perhaps I'm missing something. --Fader (talk) 14:51, 24 July 2009 (UTC)Reply

Well, it seems that the text can be improved. The intended meaning is: the difference between a good and a not-so-good hash function may be a factor of 10 or more in the average operation cost; so, if you want the hash table to have good performance, you'd better pay attention to that item. However, performance aside, the table will work correctly with any hash function; and, in many applications, making the lookup 10 times slower will probably make little difference in the overall processing time, because other costs will probably dominate the bill. For example, some shells use a hash table to map command names to directories in the command search path. A bad hash function there may add a few milliseconds to the command startup time, which is usually dwarfed by the cost of loading the program into memory. Such a bug is likely to go undetected for a long time; and even if it is detected, fixing it will be very low in the shell maintainer's priority list. On the other hand, some interpreted languages like Gawk use hashing to map variable names to values; in that context, even a 50% increase in the lookup cost will have a noticeable impact on the performance of every Gawk program.
Hope this helps. All the best, --Jorge Stolfi (talk) 23:42, 25 July 2009 (UTC)Reply

Example of a good hash function?

The following text was recently added to the article:

begin text


An example of a simple hash function with good behavior is:

unsigned long f(char key[], int arrayLength) {
  unsigned long h = 0, v = 401;
  int i;
  if (arrayLength >= 2) {
    h ^= (0xffUL & key[0]);
    h ^= (0xffUL & key[1]) << 8;
    h = ((h + 9) * (h + 2) * v) >> 1;
  }
  for (i = 2; i + 1 < arrayLength; i += 2) {
    h ^= (0xffUL & key[i]);
    h ^= (0xffUL & key[i + 1]) << 8;
    h ^= ((h + 9) * (h + 2) * v) >> 1;
  }
  if ((arrayLength & 1)) {
    h ^= (0xffUL & key[arrayLength - 1]);
    h ^= ((h + 9) * (h + 2) * v) >> 1;
  }
  return h % N;
}

This function has the recommendable property that if arrayLength is 2, and N is 2^16, then the function is almost a perfect hash, filling 65531 of 65536 slots.


end text

This example needs more context before it is put in the article. Is there a reference for it? Also the comment below the code is confusing. All the best, --Jorge Stolfi (talk) 12:47, 17 February 2010 (UTC)Reply


"Also the comment below the code is confusing.": No, the comment below the text is not confusing. —Preceding unsigned comment added by Paulsheer (talkcontribs) 11:05, 18 February 2010 (UTC)Reply

N is the size of the hash table and arrayLength is the length of the char array, if I'm not mistaken. It still needs more info on hash time and comparisons to other hash functions. I think an example is needed to give the reader a better idea of what a hash function is.

Citations (April 2010)

I agree, although I think that the following quote "Poor hashing usually degrades hash table performance by a constant factor,[citation needed] but hashing is often only a small part of the overall computation" was unnecessarily marked as needing a citation, as the truth of the assertion is self-evident. Consider a hash function f such that f(x) = 0 for all objects x. Obviously, this is the worst hash function possible, but it illustrates my point. Clearly, if the keys are comparable, then insertion and lookup will always have time complexity O(log n) rather than O(1). If the keys are not comparable, then insertion will have time complexity O(1) and lookup will have time complexity O(n) (since you can't binary search a list of items without an ordering). —Preceding unsigned comment added by 24.69.148.22 (talk) 01:39, 11 April 2010 (UTC)Reply

No, it is not self-evident, as you have shown in your example. The change from O(1) to O(n) complexity is a performance degradation by a linear factor, not by a constant one (as claimed in the article). The statement, that performance degradation due to a poor (but not pathologically bad) hash function is constant, does need a citation. – Adrianwn (talk) 05:28, 11 April 2010 (UTC)Reply

Oh, I see what you're saying. I assumed that "constant factor" meant a factor proportional to the number of items mapped by the hash function. Perhaps the sentence should be edited for clarity? I think that whoever wrote that didn't mean to say that the performance degradation was in O(k*1) = O(1). I think the person meant O(kn) = O(n) for some "constant factor" k > 1. If that is what was meant, I am sure that we could both agree that it's more likely to be true than the claim that degradation is in O(1), which should be rejected prima facie. —Preceding unsigned comment added by 24.69.148.22 (talk) 06:35, 11 April 2010 (UTC)Reply

Actually, I think that the original author meant a performance degradation by a constant factor k, as long as the hash function is suboptimal (and leads to more collisions than a better one) but not purposefully bad (like mapping all inputs to the same hash value). If that is the case, this statement needs a more detailed explanation. If no citation can be found, it should be deleted, although it appears somewhat plausible. – Adrianwn (talk) 06:40, 12 April 2010 (UTC)Reply

External links

My clean-up of the external links section was met with opposition, so let's discuss the individual items (I'll exclude the ones that have been removed again in the meantime):

  1. [1] – contains nothing that's not already in the article (see WP:ELNO #1)
  2. [2] – ditto
  3. [3] – ditto
  4. [4] – ditto
  5. [5], [6] – these might actually contain some valuable information that is not already mentioned in this article, but I don't want to go through 160 minutes of video to find out.
  6. [7] – I don't see the benefit of linking an implementation; if an example makes things clearer, then it should be in the article (in pseudocode).
  7. [8] – ditto
  8. [9] – ditto
  9. [10] – promotional link

I think that all these links should be removed, for the reasons given above (maybe except for #5). – Adrian Willenbücher (talk) 16:39, 23 September 2010 (UTC)Reply

I see benefit in linking to actual implementations. Actual code will often have more detail in it than an encyclopedia article should have. Actual code is also more concrete than pseudocode. Additional explanations (even if they go over the same ground) could help a reader. Some links may be weak (perhaps sparknotes), but the NIST link appears to link to significant detail. I'll agree that the list is getting long; it should not list every implementation but rather ones with significant content. Glrx (talk) 21:44, 23 September 2010 (UTC)Reply
What would you propose are the ones that should be kept?
Regarding additional explanations: if they go into more detail than appropriate for an encyclopedic article, then I'm fine with a link; if it is not too detailed, then I would like to add the respective content to this article; however, I don't see the benefit of linking websites that don't explain more than already present here (and often do it worse). – Adrian Willenbücher (talk) 21:57, 23 September 2010 (UTC)Reply
Earlier, I peeked at all but your number 5. If I saw any content, then I kept the link. I'd have to study the links to form a more detailed opinion. (If it is any consolation, I have deleted NIST links in the past - they are often just weak dictionary defs. This NIST link has more substance and should probably stay.) The sparknotes link was disturbing because it was heavy on ads; if you were to look it over and decide that it didn't add any reasonable content, I would not object to your removing it. IIRC, it was a recent addition and that editor may want an explanation. Glrx (talk) 23:15, 23 September 2010 (UTC)Reply
Of course they all had content. The question is, whether the content they provide justifies an inclusion. According to WP:EL, it is generally more desirable to add content to the article (if possible and reasonable) instead of linking it.
You haven't given any reason why #1, #3 and #4 should be kept, except that they might contain some information which is not already present in the article. This is not enough to warrant an inclusion.
As for the links to actual implementations: it might be better to link to b:Data Structures/Hash Tables. – Adrian Willenbücher (talk) 06:40, 24 September 2010 (UTC)Reply

Auto archive

Any objections to me setting up auto-archive on this page? It's getting rather lengthy, and I'd rather set it up to work automatically than keep doing it by hand. me_and 16:46, 23 September 2010 (UTC)Reply

Open addressing time

Currently, it turns out from this article that open addressing is absolute nonsense (except, perhaps, because of caching). Actually it's not better so much because of insertion, deletion etc. speed, but because iteration speed (over whole table) is very good even with very non-uniform hashes. Thus, it's good for example to do duplicate check for values, which you do not insert too often, where the common operation is iterating over all values (for example if you want to iterate over table of size 100 000 each time you have added 20 elements). Consequently, rehashing can also be faster. qtvali --83.178.58.29 (talk) 22:03, 3 July 2011 (UTC)Reply

Separate chaining section -- Sequential Lookup is slower than Balanced Tree Search??

"For example, a chained hash table with 1000 slots and 10,000 stored keys (load factor 10) is five to ten times slower than a 10,000-slot table (load factor 1); but still 1000 times faster than a plain sequential list, and possibly even faster than a balanced search tree."

This example and comment in the article seem to suggest that a balanced search tree is even slower than a sequential list (linear search). This is absurd! :) — Preceding unsigned comment added by 151.197.168.146 (talk) 16:15, 9 July 2011 (UTC)Reply

I don't see the problem. The claim is plausible as a lookup in the "load factor 10" scenario involves computing a hash then linear searching 10 items. Apart from the fact that such an operation is probably faster than a balanced tree search of 10,000 items (13 comparisons), searching 10 contiguous items can be much faster than accessing memory in different pages if page swapping is involved. Johnuniq (talk) 01:25, 10 July 2011 (UTC)Reply
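Checking the arithmetic in that reply, for a balanced binary search tree over the same 10,000 keys:

    \text{comparisons per lookup} \;\approx\; \log_2 10000 \;\approx\; 13.3,

versus one hash computation plus an average of about 10 mostly contiguous probes in the load-factor-10 chained table, so the two are indeed in the same ballpark and cache behaviour can tip the balance either way.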
"This example and comment in the article seems to suggest that balanced search tree is even slower than a sequential list (linear search)." -- No it doesn't. A hash table at load factor 10 might be 1000 times faster than a sequential search, and 1% faster than a balanced search tree. The balanced search tree in that case is also approximately 1000x faster than a sequential search. And besides, it said "possibly". Theclapp (talk) 20:06, 9 August 2013 (UTC)Reply

The complexity table at the beginning does not define its variables

You should define k to be the size of the hash table (the size of the underlying array) and n to be the number of elements in the hash table, and unless you make the hypothesis that k < n, the space complexity should be O(k + n) rather than simply O(n).

By the way, this page has a more detailed complexity analysis for hash tables that is much clearer than the mess on the wiki page

Worse than that, the complexity table is inconsistent with the Performance Analysis section of the article. In the complexity table, the search and delete operations are O(1 + n/k). In the Performance Analysis section, the article states: "For the best possible choice of hash function, a table of size n with open addressing has no collisions and holds up to n elements, with a single comparison for successful lookup, and a table of size n with chaining and k keys has the minimum max(0, k-n) collisions and O(1 + k/n) comparisons for lookup." If these two parts of the article are in fact discussing different things, that should be made apparent.
Furthermore, the Performance Analysis section is completely missing the derivation and justification of the time complexity expressions in the complexity table. This doesn't need to be an exhaustive treatment, but should serve to remove ambiguity in the variables shown in the table. 12.9.138.10 (talk) 23:17, 16 October 2012 (UTC)Reply
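For what it's worth, the derivation the complexity table seems to assume (a sketch under simple uniform hashing, with n elements and k slots as proposed above):

    \text{expected chain length} \;=\; \frac{n}{k}
    \quad\Longrightarrow\quad
    \text{expected lookup cost} \;=\; \underbrace{1}_{\text{hash + first probe}} \;+\; O\!\left(\frac{n}{k}\right) \;=\; O\!\left(1 + \frac{n}{k}\right).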
I already wrote above about the problem. I think that hash tables are often taught wrongly, and most of the so called "hash table" examples have only O(n). However, a real hash function takes the input data to get the key! It makes some operations on the input "key" (i.e. giving an integer representation of a string), afterwards a modulo over the hash table length, and then you get the real hash table key ("index"). Then the value is inserted at the index number's slot, together with the original key. If there is already another value there, the original keys are compared, and then the value is added to that slot if the original keys aren't equal, or the value will be overwritten. That's how you get O(n²/m), where m is the table size. All this works well only if you use an intelligent operation to transform original keys into table keys/indexes, and it certainly shouldn't take a long time to calculate the index out of the original key (otherwise you would be faster to implement a linear search). While the tables themselves are just normal lists with an incremental index, nothing more, so the trick behind everything is the hash function. Or am I wrong in thinking that many people don't understand that correctly? But then why are they talking about log(n) or making crap hash table examples?? (Don't get me wrong, this wiki article has correct graphic examples, but the explanations are somehow not clear... a clear explanation would be like what I just wrote here!) 178.197.232.59 (talk) 11:37, 22 October 2012 (UTC)Reply

"Hashing" section needs improvement

The section needs to justify the use of a hash function: Why is a hash function even needed? Why not, for example, just store with each key the address of (pointer to) the value?

"The idea of hashing is to distribute the entries (key/value pairs) across an array of buckets."

The diagram above shows buckets separate from the keys, but this sentence seems to say each (full) k-v pair is in a bucket.

"Given a key, the algorithm computes an index that suggests where the entry can be found: [...]"

"Suggests"? Surely that's far too gentle a word. How about "gives" or "tells"?

"A non-uniform distribution increases the number of collisions and the cost of resolving them."

Given the definition of collisions in the introduction, it's not clear why they are a problem that requires resolution. It's also not clear how uniformity has anything to do with them (apart from the mysterious hash function); it would seem that either two keys point to the same value or they do not.

JKeck (talk) 17:14, 31 July 2013 (UTC)Reply

JKeck: I'm not sure if you really have no idea how hash tables work, as your first sentence seems to suggest, or if the "Hashing" section really needs a better explanation. Of course linear search (unsorted) or binary search (if sorted) is slower than a lookup through a hashed index and then a linear search through the collisions.

2nd problem: keys vs bucket vs entries. Each bucket stores one entry or a linked list of entries. Each entry stores k-v pairs. Both the sentence and the graphics make sense. I have no idea how to improve the wording to make it clearer to confused people. More graphics probably. 3rd: "Suggests" is a proper word, since the key is not always found at the calculated index. Sometimes it has to search more locations, either in the same array (open addressing) or in a separate bucket list of colliding entries. Also, your fourth complaint doesn't make much sense. It's very "clear why there is a problem that requires resolution", dependent on the quality of the initially calculated index, which is dependent on the hash function, the fill factor and the uniformity of the distribution. ReiniUrban (talk) 16:47, 28 August 2016 (UTC)Reply

I'm with JKeck. The section is unscholarly, and "suggests" is an unscholarly word. Hashing is a time/storage space trade-off: it takes a very long time to search a long list, and it takes a very large amount of storage to address directly with, say, a 64-bit key (~10^19 slots). Hashing allows data storage and retrieval in a small and nearly constant amount of time, and storage only fractionally larger than that required for the records' keys themselves. The primary requirements for a hash function are efficiency and uniformity. Since the section on Choosing a hash function doesn't mention efficiency, one may presume that a hash function that takes a geological amount of time to compute won't be a problem, or that in any case, one doesn't need to worry about such things. Perfect hashing is a combinatorial curiosity only, and given the paucity of real information in that section, it ought to just go. I could go on and on, but I'll stop here to more carefully read over the entire article, which sort of just rambles on. Sbalfour (talk) 20:30, 24 September 2019 (UTC)Reply

"Clustering concern?"

Can anyone explain what the following is trying to get at?? The reference seems to be off line.

For open addressing schemes, the hash function should also avoid clustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash[3] is claimed to have particularly poor clustering behavior.[7]

  • Any competent hash function will map two or more keys to consecutive slots, given the right keys.
  • It appears to imply the slowdown ("skyrocket"!?) is caused by physical storage concerns rather than collisions. Given typical memory swapping schemes, adjacent physical addresses are more likely to be available than not.
  • In an open addressing scheme, the location of the data record is independent of the location of the slot.

Hew Johns (talk) 17:44, 19 August 2013 (UTC) Okay, I see some of the problem "open addressing" == "closed hashing", i.e. in table storage... Hew Johns (talk) 20:08, 19 August 2013 (UTC) OK, never mind. After reading the whole thing I realize it is a very poor expression of the idea that a probe or re-hash function must also be uniformly unbiased, rather than clustering about the original hash. Hew Johns (talk) 21:26, 19 August 2013 (UTC)Reply

Wrong "highly optimized" Perl claim

"Python's built-in hash table implementation, in the form of the dict type, as well as Perl's hash type (%) are highly optimized as they are used internally to implement namespaces."

This statement is misleading. Perl used a simple and especially insecure but fast form of a Jenkins OOAT hash until 5.6, when it was attacked by collisions against the simple linear chaining. The proposed fix was to use universal hashing, to keep performance and add security against hash collisions. Instead, randomized seeding on rehashing was implemented. This trick was found to be broken with 5.17.6, and so all optimizations were thrown overboard and a rather slow and secure hash function, SipHash, was selected, instead of choosing an optimized and proper hash function (like City or Murmur) and a fast and secure collision strategy. Python chose the same road and changed to SipHash. Perl went back, before the 5.18 release, to a somewhat faster Jenkins OOAT hash function, but this still didn't solve the problem of the wrong collision resolution. Optimized strategies would have been open addressing, universal hashing or perfect hashing. Furthermore, no other possible optimizations are implemented, like using SIMD or HW assisted crc32 intrinsics on Intel processors, as CRC32 turns out to be the fastest and best hash function for the average case (least number of collisions). See http://blogs.perl.org/users/rurban/2014/04/statistics-for-perl-hash-tables.html

I propose to use the wording "Python's built-in hash table implementation, in the form of the dict type, as well as Perl's hash type (%) are used internally to implement namespaces and therefore need to pay more attention to security, i.e. collision attacks."

Highly optimized and concurrent hash tables can be studied in Clojure, hopscotch hashing, or Robin Hood hashing. ReiniUrban (talk) 17:01, 12 April 2014 (UTC)Reply
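For readers who have not seen it, the Jenkins one-at-a-time ("OOAT") hash discussed above has a short, well-known form. This is a sketch following the published description, masked to 32 bits; it is not Perl's or Python's actual source:

    def jenkins_one_at_a_time(key: bytes) -> int:
        """Classic one-at-a-time mixing, reduced to 32 bits."""
        h = 0
        for byte in key:
            h = (h + byte) & 0xFFFFFFFF
            h = (h + (h << 10)) & 0xFFFFFFFF
            h ^= h >> 6
        h = (h + (h << 3)) & 0xFFFFFFFF
        h ^= h >> 11
        h = (h + (h << 15)) & 0xFFFFFFFF
        return h

    print(hex(jenkins_one_at_a_time(b"hash table")))

The speed-versus-security debate above is essentially about unkeyed mixers like this one versus keyed functions such as SipHash.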

Right. Using your own post as evidence - FAIL. — Preceding unsigned comment added by 68.183.92.186 (talk) 00:27, 14 May 2014 (UTC)Reply
Show me one counter-example of a hash implementation worse than perl5. You'll find none in open source. Claiming "highly optimized" for a highly deoptimized one is grossly misleading. I implemented many improvements in my fork, but it's still worse than ruby or any other typical naive unoptimized implementation. ReiniUrban (talk) 16:30, 28 August 2016 (UTC)Reply

Associative arrays

Perl is not interpreted. — Preceding unsigned comment added by 68.183.92.186 (talk) 00:17, 14 May 2014 (UTC)Reply

array hash table

Why link to a non-existent article? — Preceding unsigned comment added by 75.79.3.84 (talk) 20:09, 15 May 2014 (UTC)Reply

Some of the good reasons for linking to an article that does not yet exist are listed at Wikipedia: red link. --DavidCary (talk) 04:25, 24 May 2015 (UTC)Reply

Examples

This article needs simple examples up front so that novices can get the general idea. 2602:306:35FA:D540:2487:9F59:8A65:3095 (talk) 04:44, 27 January 2017 (UTC)Reply

Nest Robin Hood hashing and 2-choice hashing under Open Addressing?

Right now Robin Hood hashing and 2-choice hashing are listed as direct children of "Collision Resolution". They seem to fit the definition of Open Addressing resolution strategies. Since all other open addressing strategies are listed under the Open Addressing subsection, shouldn't these two techniques be moved there too?

Andreas Lundblad (talk) 07:13, 10 July 2019 (UTC)Reply
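For context on the classification question: 2-choice hashing is usually described with two hash functions and chained buckets, inserting into whichever of the two candidate buckets is currently shorter, which is why it does not fit cleanly under open addressing. A rough sketch of that common description (the toy hash functions and names are mine):

    m = 16
    buckets = [[] for _ in range(m)]

    def h1(key):
        # Python's built-in hash() is salted per process, but stable within one run.
        return hash(key) % m

    def h2(key):
        return (hash(key) // m) % m        # a second, roughly independent toy hash

    def insert_two_choice(key, value):
        a, b = h1(key), h2(key)
        target = a if len(buckets[a]) <= len(buckets[b]) else b   # shorter chain wins
        buckets[target].append((key, value))

    def lookup_two_choice(key):
        for i in (h1(key), h2(key)):       # only two buckets ever need to be checked
            for k, v in buckets[i]:
                if k == key:
                    return v
        return None

    for word in ("alpha", "beta", "gamma", "delta"):
        insert_two_choice(word, len(word))
    print(lookup_two_choice("gamma"))      # 5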

Lead

The lead says: Ideally, the hash function will assign each key to a unique bucket, but most hash table designs employ an imperfect hash function, which might cause hash collisions...Such collisions must be accommodated in some way.

We don't live in an ideal world, so starting out with a statement about the ideal is supercilious. The implication is that if a hash function is 'imperfect', like an assembly line making imperfect parts, why don't we fix it so it's 'perfect'? An imperfect hash function might cause hash collisions, but the sense of this statement is vague, and it begs the question: why don't we build one that 'might not' cause hash collisions? Easy enough to postulate, eh? We might lead a novice into actually attempting to do that. I wish him luck.

How about this in place of the whole paragraph: "Hash functions are designed to be efficient to compute and minimize collisions, duplicate hash codes resulting from distinct keys." 19 words versus 48. And it's very informative. The lead is not the place to go into what happens in a collision: we may assume, and indeed we find, a whole section talking about just that. Hmmm... or we could add, "Hash functions are accompanied by an adjunct collision-resolution method." Now I think we've got a substantial digest. Scholarly diction is an art. Sbalfour (talk) 21:47, 24 September 2019 (UTC)Reply

"Perfect hash function" is a term of art - https://en.wikipedia.org/wiki/Perfect_hash_function. So the characterisation of non-injective hash functions as "imperfect" here isn't meant as a slight against them or a value judgement on their quality or usefulness, it's just a jargon way of saying that they have collisions.
I think I nonetheless agree with you that the passage feels a bit weird to me, for a few reasons:
  • It doesn't link to Perfect hash function, and it's non-obvious what "imperfect" means here unless you either happen to already know the jargon or you get a spidey sense that it's a jargon term and google it
  • It's kind of tautological. "most hash table designs employ an imperfect hash function, which might cause hash collisions" - yes, that's literally what being imperfect means.
  • The "ideal" scenario is simply impossible in the extremely common scenario that there are infinitely many possible keys (e.g. keys are strings of arbitrary length). You cannot have a perfect hash function with an infinite domain.
ExplodingCabbage (talk) 17:33, 10 July 2024 (UTC)Reply
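A minimal sketch of that pigeonhole point (toy hash and table size of my choosing, purely illustrative): with more distinct keys than possible hash values, some two keys are guaranteed to collide before the loop runs out.

    m = 16                                    # only 16 possible hash values
    keys = [f"key{i}" for i in range(m + 1)]  # 17 distinct keys

    def toy_hash(s: str) -> int:
        return sum(s.encode()) % m            # any function into 16 values will do

    seen = {}
    for k in keys:
        h = toy_hash(k)
        if h in seen:
            print("collision:", seen[h], "and", k, "both hash to", h)
            break
        seen[h] = k
    # By the pigeonhole principle, the break above is always reached.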

Let's try again. The lead says, In many situations, hash tables turn out to be on average more efficient than search trees or any other table lookup structure. For this reason,.... What reason? It's a dangling participle with look-thru ambiguity; it's not attached to anything. If we turn the sentence on its head, we get: Hash tables are more efficient in many situations than other things because <reason>. We need to state the reason, then we can say, For that reason,... How about "Hash tables provide nearly constant access time and compact storage for large numbers of records that can't be efficiently stored or accessed by other methods." There are ordered and unordered lists, linked lists, double-linked lists, circularly-linked lists, tries, heaps, stacks, queues, etc., too numerous to mention in the lead, so don't try. If it matters - and it's worth a paragraph or maybe a sub-sub-section - that's where we distinguish the situations where we might use other methods. Sbalfour (talk) 22:19, 24 September 2019 (UTC)Reply
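As a rough illustration of the "nearly constant access time" wording proposed above (a sketch, not a benchmark; absolute numbers will vary by machine), a dict lookup can be compared with a linear scan over an unordered list of pairs:

    import timeit

    n = 100_000
    table = {i: str(i) for i in range(n)}      # hash-table-backed dict
    pairs = [(i, str(i)) for i in range(n)]    # plain list of (key, value) pairs

    def dict_lookup():
        return table[n - 1]

    def list_lookup():                         # linear search: time grows with n
        for k, v in pairs:
            if k == n - 1:
                return v

    print(timeit.timeit(dict_lookup, number=1_000))
    print(timeit.timeit(list_lookup, number=1_000))  # typically orders of magnitude slower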

I don't understand this second point of yours. It's not a dangling participle, it's unambiguous that the reason being talked about is that hash tables are more efficient than other stuff, and I have no idea what "look-thru ambiguity" / "look-through ambiguity" is meant to mean because I find almost no Google hits for either term (indeed this Talk page is the ONLY hit for "look-thru ambiguity"). ExplodingCabbage (talk) 17:35, 10 July 2024 (UTC)Reply

Robin hood hashing - clarity

Robin Hood hashing is entirely analogous to the three ways the chains may be ordered in chained hashing: as unordered lists, as serially ordered lists (alphabetically or by some other key-dependent criterion), or as self-ordering lists by probe frequency (i.e. key-independent criteria). So, we reshuffle the bucket sequences for collided keys just as we reshuffle the chains. Yeah? I think we lost sight of the forest for the trees on this one. Sbalfour (talk) 22:38, 24 September 2019 (UTC)Reply
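For readers who have not met it, the usual Robin Hood insertion rule is linear probing in which the incoming entry steals the slot of any resident that is closer to its own home slot, then continues inserting the displaced resident. A sketch under my own naming (it assumes the table always has at least one empty slot):

    def robin_hood_insert(table, key, value, hash_fn):
        """table holds None or (key, value, probe_distance) tuples."""
        m = len(table)
        idx = hash_fn(key) % m
        dist = 0                                         # distance from the key's home slot
        while True:
            slot = table[idx]
            if slot is None:
                table[idx] = (key, value, dist)
                return
            r_key, r_val, r_dist = slot
            if r_key == key:                             # same key: overwrite in place
                table[idx] = (key, value, r_dist)
                return
            if r_dist < dist:                            # resident is "richer" (closer to home)
                table[idx] = (key, value, dist)          # take its slot...
                key, value, dist = r_key, r_val, r_dist  # ...and keep inserting the evictee
            idx = (idx + 1) % m
            dist += 1

    table = [None] * 8
    for k in (0, 8, 16, 1):                              # 0, 8, 16 all hash to slot 0
        robin_hood_insert(table, k, str(k), lambda x: x)
    print(table)

Whether this is best framed as reordering within a probe sequence, analogous to reordering a chain, is exactly the editorial question raised above.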

Variable-sized data and chained hashing

Would this be a good citation for the better performance of variable-sized data in chaining hash tables? https://www.researchgate.net/profile/Haval_Ahmed2/post/Any_ideas_about_hash_table_implementation/attachment/59d6430bc49f478072eabc10/AS:273806436831232@1442291949107/download/In-memory hash tables for accumulating text vocabularies.pdf ?

The first sentence of section "2.3. Hash tables" seems useful, but it is not backed up in the article itself.

circular reference

Recently someone used "Inside the latency of hash table operations" by Andy Ke as a reference for this article. I feel this is perilously close to becoming a WP:REFLOOP, because many sentences and illustrations in that article are word-for-word identical to (an earlier version of) this "hash table" Wikipedia article. (A few of those sentences are sentences I wrote for Wikipedia). What should we do about this? --DavidCary (talk) 23:24, 6 May 2020 (UTC)Reply

Concrete example needed

I spent several hours documenting a simple example of a hash table ([11]), only to have it deleted one week later with the edit summary "rm unsourced code, we don't need that here". I agree that it's unsourced, but I disagree with the assessment that we don't need a code example here. I'll never forget my first visit to this article many years ago -- before I had experience with hash tables -- when I found the article seriously lacking because it didn't show a single functional example of an actual hash table. I realize that my example may not be acceptable because -- even though I personally tested it and determined that it functions exactly as explained -- it might be considered OR and therefore verboten. Having said that, I don't really care if my example is restored or some other example is used instead, but I would argue that a simple functional example is essential to help explain this topic. Lambtron talk 22:50, 7 February 2022 (UTC)Reply
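For reference while reading this thread, the kind of minimal, functional example being argued over looks roughly like the following separate-chaining sketch. It is my own illustration, not the deleted contribution and not proposed article text:

    class SimpleHashTable:
        """Tiny separate-chaining hash table: each bucket is a list of (key, value) pairs."""

        def __init__(self, size=8):
            self.buckets = [[] for _ in range(size)]

        def _index(self, key):
            return hash(key) % len(self.buckets)

        def set(self, key, value):
            bucket = self.buckets[self._index(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)   # overwrite an existing key
                    return
            bucket.append((key, value))        # otherwise chain a new entry

        def get(self, key, default=None):
            for k, v in self.buckets[self._index(key)]:
                if k == key:
                    return v
            return default

    t = SimpleHashTable()
    t.set("apple", 3)
    t.set("pear", 5)
    print(t.get("apple"), t.get("missing"))    # 3 None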

You shouldn't be using your favorite programming languages for writing samples; it should be written in a language-agnostic way at its dedicated article. If you found a C example to be helpful, maybe you can host a blog or take private notes. But such things aren't utilitarian in our Wikipedia articles. WikiLinuz🍁(talk) 23:52, 7 February 2022 (UTC)Reply
Thanks for the friendly and supportive reply. I should have known that simple C examples are not useful in Wikipedia, and that only language-agnostic examples are permitted. Per your advice, I will abandon my unworthy attempts to improve your article and instead start a blog. Hopefully, someone more enlightened will -- at some time in the future -- add an acceptable, actual example to help explain this topic. Lambtron talk 14:32, 8 February 2022 (UTC)Reply
You're welcome to try again at the Perfect hash function article, but not in C. There are language-agnostic ways to write pseudocode that involve bitwise operations. If your example were notable enough, it wouldn't be hard to find a scholarly journal article where the code might be discussed (and we could source it from there). WikiLinuz🍁(talk) 15:03, 8 February 2022 (UTC)Reply

It's not the approach I would have taken, but I can see the appeal of simply deleting my contribution rather than working collaboratively to improve it. For example, I would have used my knowledge of "Wikipedia-mandated" language-agnostic coding to translate the code -- a job an expert could probably do in a few minutes -- and left the explanation intact. But that's just me: averse to throwing out the baby with the bathwater. BTW, it's not clear to me why you think this basic example is not appropriate for this article, or why a "scholarly journal article" is required for the simple calculations involved. Frankly, I find it a bit frustrating that you seem opposed to having an easily understood example here when there's such an obvious and compelling need for one. Lambtron talk 16:51, 8 February 2022 (UTC)Reply

Hash table (and other data-structure-related articles) are candidates for WP:GA. Your source code should be sourced, and "trust me bro" isn't constructive; it's also WP:UNDUE in this article, which is why I said that you could try again at the dedicated article. Source code certainly doesn't fall under WP:Routine calculation. Wikipedia is not a repository; if you really want C code samples, maybe WikiBooks is appropriate for you. WikiLinuz🍁(talk) 17:06, 8 February 2022 (UTC)Reply
You have successfully found 10000 reasons for excluding examples from this article (apologies to Edison). Apparently examples in specific languages are prohibited; these must be translated to pseudocode before they can be considered for inclusion. The translated code would require a citation from a scholarly journal, because a straightforward explanation alone cannot be trusted. Even after translating and citing, a sole example is inherently WP:UNDUE because it implements a particular type of hash function, and therefore it cannot be used here to explain hash tables. Finally, after I clearly stated that my goal is to have a basic illustrative example in this article, you flippantly assume that what I really want is C samples -- and suggest that I visit WikiBooks to satisfy that need. I suggest a different approach: proactively work with me to enhance this article with a simple example that helps to make hash tables easier to understand. Lambtron talk 19:47, 8 February 2022 (UTC)Reply
Like I precisely stated, if you can't cite a scholarly source and want me to comply with your "trust me bro" policies, we have nothing to talk about. Any user, including an IP, can challenge unsourced OR. If you find a source that implements your example in XYZ programming language, and you translate that into pseudocode, it isn't OR in this case. You cannot publish your own thoughts, no matter how much you're in love with the example. I'm all ears if you can find a source.
straightforward explanation alone cannot be trusted - WP:VERIFYOR WikiLinuz🍁(talk) 20:01, 8 February 2022 (UTC)Reply
I'm glad to hear that you've been listening, with open mind, to my concerns about the need for a simple practical example to meet readers' needs. Lambtron talk 04:36, 9 February 2022 (UTC)Reply

Collision Resolution Chart is Original Research

Sometime in the past the following chart/image was added to the article:

 
The graph in question.

There are several problems with this graphic, but the biggest is that, by its own description, it appears to be original research. Per the WP:NOR policy, no original research is allowed in Wikipedia articles. If the poster (User:Dcoetzee, apparently) would like to update it with a reliable, published source for its specific data, then I will withdraw my objections. Otherwise I will remove it in the near future. — Preceding unsigned comment added by RBarryYoung (talkcontribs) 14:51, 9 February 2022 (UTC)Reply

@RBarryYoung: It was added by Amakuha in this revision diff. WikiLinuz🍁(talk) 21:16, 9 February 2022 (UTC)Reply
Wikipedia doesn't require references for diagrams. What's your specific concern with the accuracy of the diagram? GliderMaven (talk) 21:33, 9 February 2022 (UTC)Reply
My specific concern is that it is a violation of the WP:NOR policy, which I have read, and which has no exceptions for diagrams in general, let alone for charts of quantitative data that make specific claims, which this graphic definitely does. If you know of some rule that says that quantitative data charts are allowed to be based on original research then please point me to it. Otherwise it is in violation of Wikipedia policy. RBarryYoung (talk) 12:24, 11 February 2022 (UTC)Reply
@RBarryYoung and WikiLinuz: This diagram is not meant to be precise. It just explains the general tradeoff between separate chaining and linear probing. I think it's pretty self-evident and useful.
Though, "separate chaining with head records in the bucket array" looks to be an optimal way, which combines the best of the two approaches. --Amakuha (talk) 07:07, 10 February 2022 (UTC)Reply
Nowhere in its presentation in the article is it qualified as being imprecise. And in fact it makes very precise claims (irrespective of the accuracy of those claims) and comparisons about the performance of two different implementation methods. And regardless of that, it doesn't matter whether I agree or disagree with its claims; it is original research and does not belong here. RBarryYoung (talk) 12:24, 11 February 2022 (UTC)Reply

Introduction is incorrect

The introductory sentence and its claim that hash tables are implemented as associative arrays are incorrect. Even the linked reference says that associative arrays are implemented as hash tables, not the other way around. Currently the associative array and hash table articles are in an infinite loop, as each claims to be implemented using the other. 2001:7D0:87E3:100:C528:86D0:1049:5205 (talk) 06:18, 27 October 2023 (UTC)Reply

The article currently says, in the introductory sentence:

In computing, a hash table, also known as a hash map, is a data structure that implements an associative array, also called a dictionary, which is an abstract data type that maps keys to values.

which says that a hash table implements an associative array, i.e. that an associative array is implemented as a hash table, not the other way around. Guy Harris (talk) 06:44, 14 November 2023 (UTC)Reply
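One way to read that sentence: the associative array is the abstract interface (keys map to values) and the hash table is one concrete way of implementing it. A sketch of that relationship, with illustrative names of my own:

    from abc import ABC, abstractmethod

    class AssociativeArray(ABC):            # the abstract data type: keys -> values
        @abstractmethod
        def set(self, key, value): ...

        @abstractmethod
        def get(self, key): ...

    class HashTable(AssociativeArray):      # one possible implementation of the ADT
        def __init__(self, size=16):
            self.buckets = [[] for _ in range(size)]

        def set(self, key, value):
            b = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(b):
                if k == key:
                    b[i] = (key, value)
                    return
            b.append((key, value))

        def get(self, key):
            for k, v in self.buckets[hash(key) % len(self.buckets)]:
                if k == key:
                    return v
            raise KeyError(key)

    table = HashTable()
    table.set("apple", 3)
    print(table.get("apple"))               # 3

A balanced search tree or even an association list could implement the same interface, which is why "a hash table implements an associative array" reads correctly and the reverse does not.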

Hash digest

Where   is the hash digest ...

This explanation in section Hashing by division is unclear because the term hash digest is not explained and   does not appear in the equation above it. (Also, it should probably not be capitalized because it is not a complete sentence.)

In the following section, Hashing by multiplication, it is the other way around:   appears in the equation but is not explained. Gnib Bnil (talk) 14:55, 5 June 2024 (UTC)Reply
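For anyone trying to follow those two sections, the common textbook forms (a hedged sketch of the standard formulations, not necessarily the exact symbols the article uses) are h(x) = x mod m for hashing by division and h(x) = floor(m * ((x*A) mod 1)) with 0 < A < 1 for hashing by multiplication, where x is an integer digest of the key and m is the table size:

    import math

    def division_hash(digest: int, m: int) -> int:
        """Hashing by division: reduce an integer digest modulo the table size m."""
        return digest % m

    A = (math.sqrt(5) - 1) / 2        # Knuth's suggested constant, about 0.6180339887
    def multiplication_hash(digest: int, m: int) -> int:
        """Hashing by multiplication: floor(m * ((digest * A) mod 1))."""
        # Real implementations typically use fixed-point arithmetic instead of floats.
        return math.floor(m * ((digest * A) % 1))

    print(division_hash(123456789, 1024))
    print(multiplication_hash(123456789, 1024))

Whatever symbol the article settles on for the digest, it should appear consistently in both the displayed formula and the sentence that explains it, which seems to be the gap being reported here.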