
What is AutoNLP?

Machines must combine complex data processing with the latest machine learning algorithms to comprehend natural language. The process, and the algorithms involved, can vary depending on the language and the type of data. It is also not trivial to determine which machine learning method to apply to a given dataset or task, and the research community constantly comes up with new, more accurate methods.

For example, numerous machine learning methods and NLP pipelines can be used to classify text documents. However, you may keep asking yourself questions like these:

🤔 Which one will give me the best results?
🤔 How much data will each process need to train on?
🤔 Which pipeline will scale best at very high throughput?
🤔 How complex is it to put each of them into production?

Additionally, the entire language processing stack might differ from one language to another. Even if you have all the components together, there is no guarantee that you will get the best results.

AutoNLP

To address all these challenges, we came up with AutoNLP: an algorithm that figures out which pipeline, which features, and which model will give the best results for your dataset and your scaling requirements.
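NeuralSpace does not expose the internals of AutoNLP, so the snippet below is only a minimal sketch of the general idea behind automated pipeline and model selection: try several candidate feature-and-model combinations and keep the one that scores best on your data. The candidate pipelines, the toy dataset, and the use of scikit-learn are illustrative assumptions, not the platform's actual implementation.

```python
# Illustrative sketch of automated pipeline/model selection (not NeuralSpace's
# actual AutoNLP). Each candidate couples a feature extractor with a model;
# the best cross-validated pipeline is kept.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

# Dummy labelled text-classification data for demonstration only.
texts = [
    "book a flight to Delhi",
    "reserve a table for two tonight",
    "find me a hotel near the airport",
    "cancel my last order",
    "where is my package",
    "I want a refund for this item",
]
labels = ["travel", "travel", "travel", "orders", "orders", "orders"]

candidates = {
    "bag_of_words + naive_bayes": Pipeline([
        ("features", CountVectorizer()),
        ("model", MultinomialNB()),
    ]),
    "tfidf_bigrams + logistic_regression": Pipeline([
        ("features", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ]),
}

# Score every candidate pipeline and pick the best one.
scores = {
    name: cross_val_score(pipe, texts, labels, cv=3).mean()
    for name, pipe in candidates.items()
}
best = max(scores, key=scores.get)
print(f"Best pipeline: {best} (accuracy ≈ {scores[best]:.2f})")
```

In practice AutoNLP also weighs your scaling requirements, not just accuracy, when choosing among pipelines.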

YOU ONLY NEED TO FEED IT WITH DATA, AND IT DOES EVERYTHING ELSE FOR YOU AUTOMATICALLY!

Since creating datasets is itself a difficult task, we have made AutoNLP highly data-efficient. Different services use different variants of AutoNLP, and each has its own minimum data requirement. For example, Language Understanding (Service Code: nlu) needs 20 training examples, and Neural Translation needs 400-500 training examples.
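To make that minimum requirement concrete, the toy snippet below shows roughly what 20 labelled Language Understanding examples could look like. The field names and intents are purely illustrative assumptions, not the platform's actual upload format.

```python
# Illustrative only: a toy intent-classification dataset of the rough size the
# nlu service needs to start training (~20 examples). Field names are
# assumptions for illustration, not the NeuralSpace upload schema.
training_examples = [
    {"text": "book a cab to the office", "intent": "book_ride"},
    {"text": "get me a taxi for 6 pm", "intent": "book_ride"},
    {"text": "cancel my ride", "intent": "cancel_ride"},
    {"text": "I no longer need the cab", "intent": "cancel_ride"},
    {"text": "how far away is my driver", "intent": "ride_status"},
    {"text": "where is my cab right now", "intent": "ride_status"},
    # ...around 20 examples in total is enough for the nlu service,
    # while Neural Translation needs on the order of 400-500 sentence pairs.
]
```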

This means you don't need to have any machine learning or data science knowledge to train your AI models.

You focus on the business problems you are solving and let the NeuralSpace Platform take care of all the language processing and machine learning.