InternLM: An Open-Source Language Model for Practical Scenarios

InternLM is an open-source language model developed by the InternLM team. It is designed to provide a powerful knowledge base and reasoning capabilities for practical scenarios. In this blog post, we will introduce InternLM, discuss its performance evaluation, explore the model zoo, and look at deployment options.


InternLM is a language model that has been trained on trillions of high-quality tokens to establish a powerful knowledge base. It supports an 8k context window, enabling longer input sequences and stronger reasoning capabilities. InternLM also provides a versatile toolset that allows users to flexibly build their own workflows.

One of the key features of InternLM is its lightweight training framework, which supports model pre-training without extensive dependencies. With a single codebase, InternLM can be pre-trained on large-scale clusters with thousands of GPUs and fine-tuned on a single GPU. When training on 1024 GPUs, InternLM achieves nearly 90% acceleration efficiency.
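The "nearly 90% acceleration efficiency" figure describes how close the cluster comes to ideal linear scaling. A minimal sketch of the computation, using illustrative throughput numbers (not measured InternLM figures):

```python
def scaling_efficiency(cluster_throughput, single_gpu_throughput, n_gpus):
    """Fraction of the ideal linear speedup that the cluster actually achieves."""
    ideal_throughput = single_gpu_throughput * n_gpus
    return cluster_throughput / ideal_throughput

# Illustrative (made-up) numbers: 1024 GPUs running at 90% of ideal throughput.
eff = scaling_efficiency(cluster_throughput=921_600,
                         single_gpu_throughput=1_000,
                         n_gpus=1024)
print(round(eff, 2))  # 0.9
```

At 100% efficiency, doubling the GPU count would double throughput; the gap below 1.0 is the cost of communication and synchronization overhead.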

Performance Evaluation

InternLM has been comprehensively evaluated using the open-source evaluation tool OpenCompass. The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. The evaluation results are available on the OpenCompass leaderboard.

The evaluation results show that InternLM performs well across a range of datasets. For example, on the CommonSenseQA dataset, InternLM achieved a score of 75.2, outperforming models such as LLaMA-7B and Baichuan-7B. These results demonstrate the effectiveness and versatility of InternLM in different tasks and scenarios.

Model Zoo

InternLM has open-sourced two models: InternLM 7B and InternLM Chat 7B. These models were trained with the InternLM framework and are available in two formats: the InternLM format and the Transformers format. The InternLM format allows for further pre-training or human preference alignment training, while the Transformers format enables seamless integration with various open-source projects in the community.

To load the InternLM Chat 7B model using Transformers, you can use the following code:

from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required because InternLM ships custom model code with the checkpoint
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
model = model.eval()

# chat() returns the reply together with the updated conversation history
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
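The chat call returns both the reply and an updated history that can be threaded into the next turn. A minimal sketch of that conversational pattern, using a toy echo "model" rather than InternLM itself (the helper and its names are illustrative, not InternLM's implementation):

```python
def chat(model_fn, query, history=None):
    """Call a model on (query, history) and return (response, updated_history),
    mirroring the return shape of InternLM's chat interface."""
    history = list(history or [])
    response = model_fn(query, history)
    history.append((query, response))
    return response, history

# Toy stand-in for a real model: echoes the query with a turn counter.
echo_model = lambda query, history: f"echo[{len(history) + 1}]: {query}"

response, history = chat(echo_model, "hello", history=[])
response, history = chat(echo_model, "how are you?", history=history)
print(response)  # echo[2]: how are you?
```

Threading the returned history back into the next call is what gives the model multi-turn context; starting from `history=[]` begins a fresh conversation.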

In addition to the Transformers format, InternLM also provides a frontend interface for interacting with the InternLM Chat 7B model. You can run the provided code to start the frontend interface and have a conversation with the model.


Deployment

InternLM can be deployed using the LMDeploy tool. LMDeploy provides a one-click deployment solution for InternLM, making it easy to deploy and have conversations with the model. The deployment process involves installing LMDeploy, exporting the model, and starting a server to interact with the deployed model.
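Once a server is running, clients talk to it over HTTP. A minimal sketch of building such a request with the Python standard library, assuming the server exposes an OpenAI-style chat endpoint (the port, endpoint path, and model name below are assumptions for illustration; check your server's startup log):

```python
import json
from urllib import request

def build_chat_request(base_url, model, messages, temperature=0.7):
    """Build an HTTP POST request for an OpenAI-style /v1/chat/completions endpoint."""
    payload = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }).encode("utf-8")
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local server address and model name.
req = build_chat_request(
    "http://localhost:23333",
    "internlm-chat-7b",
    [{"role": "user", "content": "hello"}],
)
print(req.full_url)  # http://localhost:23333/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` would return the server's JSON reply; here we only construct the request, since no server is assumed to be running.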

For more details on deploying InternLM using LMDeploy, please refer to the deployment tutorial.


Conclusion

InternLM is an open-source language model trained on trillions of high-quality tokens. It provides a versatile toolset for users to build their own workflows and supports pre-training on large-scale clusters. Its performance evaluation demonstrates effectiveness and versatility across a range of tasks and datasets, and the model zoo provides access to pre-trained models in both InternLM and Transformers formats. Deployment options such as the LMDeploy tool make it easy to deploy and interact with the model. Overall, InternLM is a powerful language model suited to a wide range of practical scenarios.