Sentiment Analysis and Text Classification

Model Card
This application showcases transfer learning with BERT in a multi-task model. The model has three heads for three distinct tasks: one for sentiment analysis, one for text classification, and one for an unsupervised masked language modeling (MLM) task. The MLM head is used only during training. With this architecture and training approach, we have achieved results that closely match current state-of-the-art (SOTA) models for Vietnamese text classification and sentiment analysis, all within a single model.
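
For illustration, here is a minimal sketch of how such a shared-encoder, multi-head setup could look in PyTorch with the Hugging Face transformers library. The base checkpoint name, head sizes, and label counts below are assumptions for the example, not the app's actual configuration.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBert(nn.Module):
    # Illustrative sketch only: base checkpoint and label counts are assumed.
    def __init__(self, base_model="vinai/phobert-base", num_sentiments=3, num_topics=10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Three task-specific heads share a single encoder.
        self.sentiment_head = nn.Linear(hidden, num_sentiments)
        self.topic_head = nn.Linear(hidden, num_topics)
        # The MLM head is only used during training (simplified to one linear layer here).
        self.mlm_head = nn.Linear(hidden, vocab)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]   # [CLS]-style sentence representation
        token_states = out.last_hidden_state   # per-token states for the MLM objective
        return {
            "sentiment_logits": self.sentiment_head(pooled),
            "topic_logits": self.topic_head(pooled),
            "mlm_logits": self.mlm_head(token_states),
        }
```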
As this model is still in development, predictions may not be entirely accurate and undetected bugs may be present. We are actively working to refine and improve its performance to deliver the best possible results for our users.
Label Explanation
Modified Result
In text classification tasks, the Others label is often ambiguous. We therefore deactivate this label whenever the second-highest label's score meets a predefined threshold. To view the unaltered predictions, simply click the Modified button.
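
As a rough sketch of this post-processing rule, the function below drops Others in favor of the runner-up label when the runner-up clears a threshold. The label names and the 0.3 threshold are illustrative assumptions, not the app's actual values.

```python
def suppress_others(scores: dict, threshold: float = 0.3) -> str:
    """Return the predicted label, dropping 'Others' when the runner-up is confident."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top_label = ranked[0][0]
    if top_label == "Others" and len(ranked) > 1 and ranked[1][1] >= threshold:
        return ranked[1][0]  # fall back to the second-highest label
    return top_label

# Example: 'Others' scores highest, but the runner-up clears the threshold, so it wins.
print(suppress_others({"Others": 0.45, "Sports": 0.40, "Politics": 0.15}))  # -> "Sports"
```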