Researchers from MIT and elsewhere have used machine learning to explore the possibility of building a better hash function. Their findings reveal how database searches can be optimised with a custom-designed hash function.
The researchers discovered that data collisions can be reduced by employing learned models instead of standard hash functions. A learned model is produced by applying a machine-learning algorithm to a dataset to identify its key features. The researchers' experiments also showed that learned models were frequently more computationally efficient than perfect hash functions.
“In this study, we discovered that there are circumstances in which it is possible to find a more optimal compromise between the time required to compute the hash function and the likelihood of collisions. In these cases, the computation time for the hash function can be increased a little. Still, at the same time, its collisions can be significantly reduced,” said Ibrahim Sabek, a postdoc in the MIT Data Systems Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the paper’s co-lead author.
Because hashing is used in many contexts, including database indexing, data compression and cryptography, hash functions must be quick and efficient. Many online databases use hashing, from library catalogues to e-commerce websites. The code produced by a hash function indicates where a piece of data could be stored, so using these codes makes it much faster to find and retrieve the information.
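The idea can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' system: the hash code of a key selects one bucket, so a lookup scans that small bucket instead of the whole table. The bucket count and record layout are illustrative.

```python
# Minimal sketch of hash-based indexing: the hash code picks one bucket,
# so a lookup scans that bucket instead of the whole table.

NUM_BUCKETS = 16  # illustrative table size

def make_table():
    return [[] for _ in range(NUM_BUCKETS)]

def insert(table, key, value):
    table[hash(key) % NUM_BUCKETS].append((key, value))

def lookup(table, key):
    # The hash code narrows the search to a single bucket.
    for stored_key, value in table[hash(key) % NUM_BUCKETS]:
        if stored_key == key:
            return value
    return None  # key not present

catalogue = make_table()
insert(catalogue, "ISBN-0262046305", "Introduction to Algorithms")
print(lookup(catalogue, "ISBN-0262046305"))  # Introduction to Algorithms
```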
Traditional hash algorithms assign codes arbitrarily, so two pieces of data can receive the same hash, which leads to collisions. A collision occurs when a user searches for a specific piece of information but receives results for many files with the same hash value. It then takes much longer to zero in on the correct one, which slows down searches and decreases performance.
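Why collisions are unavoidable with a fixed-size table can be seen with a short sketch (the key names and bucket count are illustrative): once there are more keys than buckets, some keys must share a bucket, and every shared slot means extra scanning at lookup time.

```python
# Minimal sketch: a traditional hash function spreads keys over a fixed
# number of buckets, so distinct keys can land in the same bucket.

NUM_BUCKETS = 8  # illustrative

def bucket_of(key):
    return hash(key) % NUM_BUCKETS

def count_collisions(keys):
    buckets = {}
    for key in keys:
        buckets.setdefault(bucket_of(key), []).append(key)
    # Every key beyond the first in a bucket is a collision.
    return sum(len(ks) - 1 for ks in buckets.values())

keys = [f"record-{i}" for i in range(20)]
# 20 keys in 8 buckets: by the pigeonhole principle at least 12 keys
# must share a bucket, however good the hash function is.
print(count_collisions(keys))
```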
Perfect hash functions are a class of hashing algorithms designed to place data in a way that eliminates the possibility of collisions. However, they are labour-intensive to build for each dataset and slower to compute than regular hash functions. The new findings suggest the number of collisions can be reduced without those costs, so the method might speed up the computing systems scientists use to store and analyse biological information such as DNA and amino acid sequences.
Compared with conventional hash functions, learned models reduced the proportion of collisions in a dataset from 30% to 15% when the data were distributed in a predictable way. The learned models even managed to outperform perfect hash functions, cutting execution time by as much as 30% in the best cases.
When the researchers investigated the use of learned models for hashing, throughput was found to be primarily affected by the number of sub-models. Each learned model is made up of several simpler linear models, each of which approximates some portion of the data distribution. The learned model’s approximation improves with additional sub-models, albeit at the cost of increased processing time.
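That structure can be sketched as follows. This is a simplified illustration, assuming (as in CDF-based learned indexes) that each sub-model is a straight line approximating the keys' cumulative distribution over one segment of the sorted key space; all class and parameter names are illustrative.

```python
# Minimal sketch of a learned hash: several linear sub-models jointly
# approximate the key distribution, and a key's predicted rank is
# scaled to a bucket index.

import bisect

class LearnedHash:
    def __init__(self, sorted_keys, num_submodels, num_buckets):
        self.n = len(sorted_keys)
        self.num_buckets = num_buckets
        self.boundaries = []   # first key covered by each sub-model
        self.lines = []        # (slope, intercept) of each sub-model
        seg = max(1, self.n // num_submodels)
        for start in range(0, self.n, seg):
            end = min(start + seg, self.n) - 1
            x0, x1 = sorted_keys[start], sorted_keys[end]
            # Fit a line through the segment's endpoints: rank as a
            # linear function of key value.
            slope = (end - start) / (x1 - x0) if x1 > x0 else 0.0
            self.boundaries.append(x0)
            self.lines.append((slope, start - slope * x0))

    def __call__(self, key):
        # Route the key to the sub-model covering its segment, predict
        # its rank, then scale the rank to a bucket index.
        i = max(0, bisect.bisect_right(self.boundaries, key) - 1)
        slope, intercept = self.lines[i]
        rank = min(max(slope * key + intercept, 0.0), self.n - 1)
        return int(rank * self.num_buckets / self.n)

# Evenly spaced keys are approximated well, so the hash spreads them
# without collisions; skewed data would need more sub-models.
keys = list(range(0, 200, 2))
h = LearnedHash(keys, num_submodels=4, num_buckets=100)
```

More sub-models make the approximation tighter and collisions rarer, but each lookup then pays more for routing and prediction, which is the trade-off the researchers describe.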
A minimum number of sub-models is needed to construct the approximation the hash function requires, but Sabek believes the collision-reducing benefits of adding more will plateau beyond a certain point.
The researchers aim to extend this work by applying learned models to create hash functions for other classes of data. They also intend to investigate learned hashing for transactional databases, where data is updated frequently and each update requires revising the model; revising a model without sacrificing accuracy is challenging.
“We want to inspire the community to incorporate machine learning into their standard algorithms and data structures. Then, we can apply machine learning to capture data attributes better and achieve higher performance with virtually any fundamental data structure. There is still a lot we can investigate,” Sabek added.