
Introduction
Contact us to get the latest Python package.
All data is local and accessible only to the user. LitenAI does not collect any data from the user.
Prepare Your Environment
This Python package has been tested on Linux. For other operating systems, you can run it locally using the LitenAI Docker image.
- Ensure that Java version 17 is installed and in use. If not, install and configure it as follows:
sudo apt install openjdk-17-jre-headless
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64/
- Ensure the Python version is greater than 3.12.
- Create a virtual environment for LitenAI under your preferred directory (in the commands below, replace /path/to/venvs with your desired install location).
python3 -m venv /path/to/venvs/litenai
source /path/to/venvs/litenai/bin/activate
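The environment checks above can be sketched as one short script. This is only a convenience sketch: it assumes java and python3 are on your PATH, and the expected versions (Java 17, Python greater than 3.12) come from the steps above.

```shell
# Environment sanity check (sketch). Run inside the activated venv.
java -version 2>&1 | head -n 1    # should mention version 17
python3 --version                 # should report a version above 3.12
python3 - <<'EOF'
import sys
# LitenAI's documented requirement is Python > 3.12
ok = sys.version_info >= (3, 12)
print("python version ok" if ok else "python too old for LitenAI")
EOF
```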
Install the LitenAI package
In all the following commands, replace <version> with the actual version provided by the installer.
- Extract the LitenAI package as shown below; it should contain three items: a Python wheel file, a config file, and a lake directory.
tar -xvzf litenai-<version>.tgz
- Change to the extracted directory, then install the LitenAI package using the provided .whl file. This will install all necessary LitenAI packages and the executable.
cd litenai-<version>
pip install litenai-<version>-py3-none-any.whl
- To install AutoGluon for CPU-only usage, use the following command:
pip install autogluon.tabular --extra-index-url https://download.pytorch.org/whl/cpu
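After the install, you can confirm that the wheel registered a distribution and that the launcher used later in this guide is on your PATH. This is a sketch: the names litenai and litenchat.sh are taken from the file names above; adjust them if your package differs.

```shell
# Post-install check (sketch). Each line prints a hint if something is missing.
python3 -m pip show litenai 2>/dev/null || echo "litenai distribution not found"
command -v litenchat.sh || echo "litenchat.sh not on PATH (is the venv active?)"
```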
Set up LitenAI config files
LitenAI is highly configurable. Before the first run, edit the litenai-config.yml file.
- In the LLM section, add your openai_api_key.
llm:
  # LLM API key for API calls
  openai_api_key: "<Insert your key here>"
- In the smartlake section of litenai-config.yml, replace /exact/path/to/lake with the local directory where the lake was extracted. All data will reside in the local storage location you provide below.
liten:
  # Liten lake root location
  smartlake:
    lake_url: 'file:/exact/path/to/lake'
    lakestore: 'file:/exact/path/to/lake/litenailakestore.json'
    # log files path
    _log_dir: "/exact/path/to/lake/logs"
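Before the first run, it is worth verifying that the paths referenced in litenai-config.yml actually exist. The sketch below uses LAKE as a stand-in for your extracted lake directory; it falls back to a temporary directory (and creates the files it checks) purely so the sketch is self-contained.

```shell
# Path sanity check (sketch): set LAKE to your real lake directory.
LAKE="${LAKE:-$(mktemp -d)}"          # temp-dir fallback for illustration only
mkdir -p "$LAKE/logs"                 # the config's _log_dir must exist and be writable
touch "$LAKE/litenailakestore.json"   # created here only so the sketch runs anywhere
for p in "$LAKE" "$LAKE/litenailakestore.json" "$LAKE/logs"; do
  if [ -e "$p" ]; then echo "ok: $p"; else echo "missing: $p"; fi
done
```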
Start the LitenAI local service
- Launch the service using the following command:
litenchat.sh -c ./litenai-config.yml
Once started, the service is typically available on port 8210. The URL and port are also printed in the terminal. On the first run, it discovers all ingested tables, which may take a few minutes. Subsequent starts should be much faster.
Open a Chrome browser and navigate to http://localhost:8210.
This launches the chatbot. For reference, the login IDs are listed in the litenai-config.yml file.
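If you prefer to confirm from the terminal that the service is up before opening the browser, a quick check like the one below works. It assumes curl is installed and the default port 8210; use the URL printed in the terminal if yours differs.

```shell
# Readiness check (sketch): one quick probe of the chat UI.
URL="http://localhost:8210"
if curl -fsS -o /dev/null --max-time 5 "$URL"; then
  echo "service is up at $URL"
else
  echo "service not reachable yet at $URL (first start can take a few minutes)"
fi
```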
Try out example chats
The installation includes a pre-loaded Smart Lake database. Explore the chat examples in the Articles section; these sessions are ready for you to try out.
You can ask about the ingested data, or about anything else. You can also ingest your own data and ask LitenAI about it.
For any issues or inquiries, feel free to contact us.