Revolutionizing AI efficiency: Meta AI’s new approach, READ, reduces memory consumption by 56% and GPU energy usage by 84%

https://arxiv.org/abs/2305.15348

Large-scale transformer architectures have achieved state-of-the-art results across many natural language processing (NLP) tasks. These models are typically pre-trained on generic web-scale data and then fine-tuned for specific downstream goals. Scaling them up has brought several gains, including better predictive performance and sample efficiency. However, the cost of tuning these models is now out of reach for most people: since 2018, model sizes have grown exponentially relative to GPU memory, putting full fine-tuning of the largest models beyond the means of most practitioners.

To sidestep the difficulty of fine-tuning all of the parameters, parameter-efficient transfer learning (PETL) has emerged as a viable alternative. PETL techniques adapt pre-trained models to a target task by training only a small set of additional, task-specific parameters. However, existing approaches either increase inference latency or save only a negligible amount of memory during training.
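To make the idea concrete, here is a minimal, hedged sketch of one popular PETL technique mentioned later in this article, a LoRA-style low-rank adapter, in PyTorch. The class name, rank, and initialization choices are illustrative assumptions, not code from the READ paper.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                    # backbone weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)    # down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)   # up-projection
        nn.init.zeros_(self.lora_b.weight)                             # update starts at zero

    def forward(self, x):
        # Frozen base output plus the trainable low-rank correction
        return self.base(x) + self.lora_b(self.lora_a(x))
```

Wrapping, say, the attention projections of a frozen backbone with such modules leaves only a small fraction of parameters trainable, but the extra projections at inference time (unless merged back into the base weights) illustrate the latency trade-off noted above.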

A new Meta AI study addresses these issues by introducing REcurrent ADaption (READ).


To overcome these PETL constraints, READ adds a small recurrent neural network (RNN) alongside the backbone model, together with a joiner network that combines information from multiple sources to provide the input to the RNN. The addition requires few trainable parameters and a minimal amount of memory.

During fine-tuning, READ first performs a forward pass through the frozen transformer backbone, caching the intermediate hidden states at each layer. The RNN hidden states are then computed iteratively over the encoder and decoder layers, and the corrected final state is obtained by summing the RNN output with the backbone output.
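The following is a minimal sketch of this computation, assuming a GRU cell as the recurrent unit and a single linear joiner; the class, variable names, and shapes are illustrative assumptions rather than Meta's implementation.

```python
import torch
import torch.nn as nn

class READCorrector(nn.Module):
    """Tiny trainable side network: a joiner FFN plus an RNN cell shared across layers."""
    def __init__(self, hidden_dim: int, read_dim: int):
        super().__init__()
        self.joiner = nn.Linear(hidden_dim, read_dim)    # combines backbone states into RNN inputs
        self.rnn_cell = nn.GRUCell(read_dim, read_dim)   # one cell reused at every backbone layer
        self.out_proj = nn.Linear(read_dim, hidden_dim)  # maps the RNN state back to backbone width

    def forward(self, layer_states, backbone_output):
        # layer_states: cached hidden states from the frozen backbone, one tensor per
        # layer, each flattened to shape (batch * seq, hidden_dim).
        h = torch.zeros(layer_states[0].shape[0], self.rnn_cell.hidden_size,
                        device=backbone_output.device)
        for state in layer_states:                       # iterate over layers, not tokens
            h = self.rnn_cell(self.joiner(state), h)
        return backbone_output + self.out_proj(h)        # corrected final state
```

In this sketch, only the joiner, the GRU cell, and the output projection receive gradients; the backbone stays frozen, and its intermediate states are simply read from the cached forward pass.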

Because READ is recurrent, its trainable parameters do not grow with backbone depth, which keeps compute requirements low. Consequently, the proposed tuning procedure relies solely on RNNs and feed-forward networks (FFNs) rather than on an attention mechanism. And since READ does not require a separate pre-training step for the added network, both usability and training efficiency improve.
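As a rough back-of-the-envelope illustration of this point (with assumed dimensions, not figures from the paper), a per-layer adapter's trainable parameter count grows linearly with depth, whereas a single shared recurrent module stays constant:

```python
hidden_dim, adapter_dim, read_dim = 1024, 64, 128          # assumed sizes, for illustration only

per_layer_adapter = 2 * hidden_dim * adapter_dim            # down- plus up-projection, per layer
shared_read = (hidden_dim * read_dim                        # joiner
               + 3 * read_dim * (read_dim + read_dim)       # GRU cell weights (biases ignored)
               + read_dim * hidden_dim)                      # output projection

for depth in (12, 24, 48):                                   # deeper and deeper backbones
    print(f"{depth} layers: adapters={per_layer_adapter * depth:,}  read={shared_read:,}")
```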

The researchers compare READ with standard PETL methods, including BitFit, prompt tuning, and LoRA, as well as with full fine-tuning, on the GLUE benchmark and several other natural language processing benchmarks. READ outperforms various fine-tuning methods on GLUE in accuracy while reducing model training memory consumption by 56% and GPU energy usage by 84% relative to full fine-tuning. The results also suggest that READ is a highly scalable, backbone-size-independent approach for tuning huge transformers.

As mentioned in the paper, the team could not scale up the backbone further due to limited compute. The researchers plan to fine-tune READ on larger backbones, such as LLaMA-7B and possibly bigger variants, in the future. They also note that one drawback of READ is that it often takes more epochs than competing PETL algorithms to converge on small datasets; with few data points to work with, READ's per-step efficiency can therefore translate into only small savings in total consumption. They plan to investigate READ in the low-data regime. The team believes READ will open up fine-tuning of huge models to a wider audience of researchers and developers.



Tanushree Shenwai is a Consulting Intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Bhubaneswar. She is passionate about Data Science and has a keen interest in the applications of Artificial Intelligence across various fields, as well as in exploring new technological advancements and their real-life applications.


