
How it works

Deep Learning, Faster

Neural Network Architecture Optimization

After preparing the data, machine learning typically begins by splitting the data into two pieces: a training set and a test set. The training set is used to fit the model, and the test set is used to evaluate the model's performance.
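As a minimal sketch of this step, here is one common way to perform the split using scikit-learn; the array names, feature counts, and 80/20 ratio are illustrative assumptions, not part of Quante Carlo's pipeline.

```python
# A minimal sketch of a train/test split using scikit-learn.
# The variable names (X, y), data shapes, and the 80/20 ratio are
# illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 20)             # 1,000 examples with 20 features each
y = np.random.randint(0, 2, size=1000)   # binary labels

# Hold out 20% of the data as a test set; the model is fit on the remaining 80%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(X_train.shape, X_test.shape)  # (800, 20) (200, 20)
```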


In a Neural Network, there are additional architecture parameters that must be set before training, such as the number and size of the hidden layers.

These parameters are often chosen by repeatedly training the neural network with different settings and comparing the results. This process is called hyper-parameter tuning, and it is very time consuming, often multiplying the total model-building time by a factor of 100 or more.
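The sketch below illustrates this brute-force search loop: one full training run per candidate architecture, keeping whichever scores best on held-out data. The candidate layer sizes and the use of scikit-learn's MLPClassifier are assumptions for illustration; this is not Quante Carlo's method.

```python
# A minimal sketch of hyper-parameter tuning by exhaustive search:
# train one model per candidate architecture and keep the best performer.
# The candidate layer sizes and the choice of MLPClassifier are illustrative
# assumptions, not Quante Carlo's proprietary technique.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Each tuple is (size of layer 1, size of layer 2, ...).
candidate_architectures = [(32,), (64,), (32, 32), (64, 32), (128, 64)]

best_score, best_arch = -1.0, None
for hidden_layers in candidate_architectures:
    # Every candidate requires a complete training run -- this repetition is
    # where the 100x-or-more multiplier on model-building time comes from.
    model = MLPClassifier(
        hidden_layer_sizes=hidden_layers, max_iter=500, random_state=0
    )
    model.fit(X_train, y_train)
    score = model.score(X_test, y_test)
    if score > best_score:
        best_score, best_arch = score, hidden_layers

print(f"Best architecture: {best_arch} (accuracy {best_score:.3f})")
```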

Quante Carlo's unique, proprietary technology speeds up this optimization process by 75%* compared to the current second-best technology.

With the cost of training a Deep Learning model reaching multiples of $10MM, this translates into significant cost savings and a marketplace advantage.
