LITENAI RELEASE NOTES
INTRODUCTION
LitenAI is an AI engineer that automates technical support and debug.
The LitenAI platform orchestrates customized AI agents, based on user prompts, to observe, reason about, and visualize data at petabyte scale. Through its data layer, it can connect to external sources or ingest structured data, logs, and metrics. It also integrates customer knowledge bases from text, PDFs, HTML, and other media formats, using this information to deliver in-depth, contextually relevant insights drawn from both the data lake and securely integrated knowledge sources. By shifting technical support to an AI-first model, LitenAI reduces response times to critical issues by up to 100x, bypassing traditional engineering bottlenecks.
TRY IT NOW!
Now available in early access: you can try our chat interface by logging into your LitenAI account. LitenAI can be deployed on public clouds with pre-configured agent workflows. Users can also evaluate it locally as a Docker image within secure data networks. If you need an account or early access, please contact us.
DOCKER INSTALL DIRECTIONS
Contact us to get the latest Docker image. Once downloaded, follow these directions
to run it locally. All data is local and accessible only to the user. LitenAI does
not collect any data from the user.
docker load < litenai.version.tar.gz
docker image ls
STARTING LITENAI SERVICE
To run the Docker image for application log analysis, set the API key variable.
Set it to a valid OpenAI API key if using OpenAI, or to the API key of a local
install; the endpoint must be OpenAI API compatible.
export LITENAI_API_KEY="api_key"
Now you can run the docker command for log reasoning.
Replace latest with the correct tag.
docker run -d --name litenai_container -p 8210:8210 -p 8220:8220 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} litenai/litenai:latest
It takes a few minutes for the LitenAI service to start. See below for some
sample log reasoning chats. In the chatbot's side panel, you will also see
saved chats. Double-click one to load and view it. You can chat with LitenAI
as you would with a support and debug engineer.
From a browser, open the following location and start using LitenAI.
You may need to replace localhost with the URL of your machine.
http://localhost:8210
It will ask for login credentials. Please contact us to get them.
LitenAI agents can be tuned to customer requirements. By default, the Docker
image provides a local lake with AI agents tuned for application log reasoning.
For log reasoning you don't need to set these variables.
However, if you wish to try debug assistance for field equipment such as
medical devices, LitenAI provides an agentic configuration with a sample lake
tuned to assist field technicians providing technical support for medical and
industrial devices. To use this techassist mode, use the following docker run
command. Replace latest with the correct tag.
docker run -d --name liten_container -p 8210:8210 -p 8220:8220 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} -e LITENAI_AGENTIC_MODE="techassist" -e LITENAI_LAKE_URL="/srv/lake/techassist" litenai/litenai:latest
These agentic modes are data driven configurations and can be tuned to customer
specifications.
LitenAI can also use locally served LLMs. We provide pre-built images to serve
in customer networks and can help with deploying and serving LLMs of interest.
If an LLM is served locally, the URL for LLM calls and the LLM model name are
needed. LitenAI uses OpenAI API calls for all LLMs.
An example setting is shown below.
export LITENAI_SERVE_URL="http://localhost:8000/v1"
export LITENAI_LLM_MODEL="meta-llama/Llama-3.2-1B-Instruct"
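Since LitenAI issues OpenAI-compatible API calls, a request to the locally served model takes the standard chat-completions shape. The sketch below only builds the endpoint and payload from the example settings above; it is illustrative of the wire format LITENAI_SERVE_URL is expected to accept, not of LitenAI internals.

```python
import json
import os

# Read the same variables the docker run command passes through,
# falling back to the example values shown above.
serve_url = os.environ.get("LITENAI_SERVE_URL", "http://localhost:8000/v1")
model = os.environ.get("LITENAI_LLM_MODEL", "meta-llama/Llama-3.2-1B-Instruct")

# OpenAI-compatible chat-completions endpoint and request body.
endpoint = f"{serve_url}/chat/completions"
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "List all the tables."}],
}
print(endpoint)
print(json.dumps(payload, indent=2))
```

Any OpenAI API compatible server (for example, a local vLLM or similar deployment) should accept this request shape.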
This is the docker run command. Use the Docker environment variables to select
the correct run modes.
docker run -d --name liten_container -p 8080:8080 -p 8210:8210 -p 8221:8221 -e LITENAI_API_KEY=${LITENAI_API_KEY} -e LITENAI_SERVE_URL=${LITENAI_SERVE_URL} -e LITENAI_LLM_MODEL=${LITENAI_LLM_MODEL} -e LITENAI_AGENTIC_MODE=${LITENAI_AGENTIC_MODE} -e LITENAI_LAKE_URL=${LITENAI_LAKE_URL} litenai/litenai:latest
APPLICATION LOG REASONING SESSION
You can follow this chat script to understand the agents and the lake.
The lake contains ingested documents as well as tables. User prompts are underlined in the text below.
Let us list the tables and understand the tables we will be working with.
List all the tables.
Describe imapserverlog and webaccesslog tables.
Count the number of rows in imapserverlog and webaccesslog tables.
Ask for a plan to debug the highest latency values.
Can you give me a plan to debug high latency values? Latency is measured
by the timestamp difference from the imapserverlog to the webaccesslog table.
These tables are joined by requestid fields. Keep all the fields of both tables
in the output, with names prepended with table abbreviations. Give me the SQL
selecting the highest 500 latency values.
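The plan this prompt asks for amounts to a join on requestid followed by a timestamp difference and a top-N sort. A minimal Python sketch of that logic, using made-up rows (the real tables, columns, and generated SQL live in the lake):

```python
from datetime import datetime

# Hypothetical stand-ins for the two lake tables.
imapserverlog = [
    {"requestid": "r1", "timestamp": "2024-01-01 10:00:00.000"},
    {"requestid": "r2", "timestamp": "2024-01-01 10:00:01.000"},
]
webaccesslog = [
    {"requestid": "r1", "timestamp": "2024-01-01 10:00:00.250", "status": 200},
    {"requestid": "r2", "timestamp": "2024-01-01 10:00:03.500", "status": 500},
]

FMT = "%Y-%m-%d %H:%M:%S.%f"

def latency_join(imap_rows, web_rows, top_n=500):
    """Join on requestid, compute latency in seconds, return highest first."""
    web_by_id = {r["requestid"]: r for r in web_rows}
    joined = []
    for imap in imap_rows:
        web = web_by_id.get(imap["requestid"])
        if web is None:
            continue
        latency = (datetime.strptime(web["timestamp"], FMT)
                   - datetime.strptime(imap["timestamp"], FMT)).total_seconds()
        # Prepend table abbreviations to field names, as the prompt asks.
        row = {f"imap_{k}": v for k, v in imap.items()}
        row.update({f"web_{k}": v for k, v in web.items()})
        row["latency_seconds"] = latency
        joined.append(row)
    joined.sort(key=lambda r: r["latency_seconds"], reverse=True)
    return joined[:top_n]

rows = latency_join(imapserverlog, webaccesslog)
print(rows[0]["imap_requestid"], rows[0]["latency_seconds"])  # r2 2.5
```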
Now you can execute the generated SQL.
execute the generated sql
Ask for an analysis.
Can you analyze the possible causes of high latency?
Drill down for a specific error code.
I want to focus on status codes. Could you modify the sql to select
all rows with status codes indicating errors, like 404, 500, etc.
Execute the sql now.
Now look at the plot. It provides a data explorer on the output dataset.
Plot the data.
You can also serve the plot on a different tab for better view.
serve the plot.
Now ask for top error.
Can you tell me one top error code for latency degradation? Limit your selection to one error code.
Ask for resolution.
Looks like 500 is the real problem. Suggest some resolutions for this.
You can ask for more plans and drill down further as needed.
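The drill-down above, filtering on error status codes and then picking the top offender, reduces to a filter-and-count. A sketch with hypothetical rows:

```python
from collections import Counter

# Hypothetical joined rows with HTTP status codes and latencies.
rows = [
    {"web_status": 500, "latency_seconds": 4.2},
    {"web_status": 404, "latency_seconds": 1.1},
    {"web_status": 500, "latency_seconds": 3.8},
    {"web_status": 200, "latency_seconds": 0.2},
]

# Keep only rows whose status code indicates an error (4xx/5xx).
errors = [r for r in rows if r["web_status"] >= 400]

# Count occurrences per error code; the most common one is the top suspect.
top_code, count = Counter(r["web_status"] for r in errors).most_common(1)[0]
print(top_code, count)  # here: 500 occurs twice
```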
MEDICAL EQUIPMENT MAINTENANCE SESSION
You can follow this chat script to understand the agents and the lake.
In this chat, a technician is performing maintenance on a Baxter medical pump.
I need help with Baxter infusion pump preventive maintenance
Assist me by naming all possible preventive maintenance for Baxter
SIGMA Spectrum Infusion Pump. Do not provide details.
Query the servicerecord to show me the PM needed for this pump.
LitenAI looks into the service history and outputs the PMs due right now:
the Pump Operation Test is due.
Show me the last 3 PMs done on this pump for last 12 months.
LitenAI looks into the service history and outputs the list of PMs and their
completion dates. In the next prompt, ask it to use the RAG agent only; this
way, it will pick the information from the ingested manuals.
Use the rag agent to provide the exact names of the tests that are part of the Pump Operation Test. Don't include any details on the steps.
This performs retrieval-augmented generation to answer from the ingested manuals.
Show me the exact setup steps for Pump Operations Test.
Do not provide details.
The chatbot answers from the summary in the manual, including the setup image.
From Baxter test manual show me the Test Setup Image for Pump Operation Test.
Show me the exact initial steps for Flow Rate Accuracy Test without any
details.
LitenAI answers from the summary in the manual, including the setup image.
Show me exact steps for Volumetric Test for Baxter SIGMA Pump without any
details. Show the image of test also.
Provide me exact steps for Battery Capacity Test without any details.
Assist me with the Preventive Maintenance Check Sheet for Pump Operation Test.
Assist me to generate the completed/filled Flow Rate Accuracy Test Report in text
Please generate the completed/filled Battery Capacity Test Report in text
INGEST UNSTRUCTURED DATA
To add unstructured data, such as PDFs, text, HTML, or other media files, select the tab named "Lake" from the main display. Choose the file under the heading:
Ingest unstructured data like PDF, text, HTML, etc.
Click "Ingest." This will ingest all the data and index it with embeddings to enable semantic search as well as Retrieval-Augmented Generation (RAG) flow.
You can add your knowledge base, including PDF manuals, troubleshooting guidelines, standard operating procedures, etc. LitenAI agents then use this data to refine their responses. Note that none of this data is shared with Liten or the LLM; it remains entirely within your customer lake.
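As a rough illustration of how indexed documents support semantic search, retrieval can be thought of as ranking documents by vector similarity to the query. The toy sketch below uses bag-of-words counts in place of real embeddings; it is not LitenAI's actual indexing pipeline.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical ingested manual snippets.
docs = [
    "battery capacity test steps for the infusion pump",
    "flow rate accuracy test setup",
    "network configuration for the web server",
]

query = "battery capacity test"
qv = vectorize(query)
# The best-matching snippet would be handed to the LLM as context (RAG).
best = max(docs, key=lambda d: cosine(qv, vectorize(d)))
print(best)
```

Real embedding models capture meaning beyond shared words, but the retrieve-then-generate flow is the same.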
INGEST STRUCTURED DATA
To add structured data from files containing logs or tables, first select the "Lake" tab from the main display. Refer to the following section:
Ingest CSV or JSON. Infer new structures from file and create table below.
All ingested structured data will be appended to the selected table.
If a table with the same structure already exists, select it from the top. Choose the file. Supported file types are CSV or JSON. Select the options, then click "Ingest" to add the data to the tables. The "Chat" tab will show the response.
If a table matching the file structure does not exist, it must be created first. Select the table from the top; a sample JSON structure will appear in the section titled:
Table Structure
To infer this from the selected file, enter a unique table name below, then click "InferDesc." This will display the inferred table structure JSON above.
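A structure-inference step of this kind can be sketched as sampling the file and guessing a type per column. The sketch below is illustrative only, not LitenAI's actual "InferDesc" logic; the type names are assumptions.

```python
import csv
import io
import json

def infer_structure(csv_text, table_name):
    """Guess a simple column-type map from the first CSV data row (illustrative)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    first = next(reader)
    cols = {}
    for name, value in first.items():
        try:
            float(value)
            cols[name] = "double"   # numeric-looking value
        except ValueError:
            cols[name] = "string"   # anything else
    return {"table": table_name, "columns": cols}

sample = "requestid,latency\nr1,0.25\nr2,2.5\n"
print(json.dumps(infer_structure(sample, "latencylog")))
```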
Sometimes, timestamp columns are not inferred correctly. If timestamp data is important, you may need to add it manually. Use the following fields:
"timeseries": "timestamp", "timeseries_format": "yyyy-MM-dd HH:mm:ss.SSS"
Click the "Create" button.
If successful, this will add the table to the Table list and set it as the active table.
Now you can ingest the file by clicking the "Ingest" button below the "Ingest CSV" line.
Your structured logs are now added to the lake. These logs can be analyzed using LitenAI agents as described above. Try out your reasoning scenarios.
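The timeseries_format shown earlier, "yyyy-MM-dd HH:mm:ss.SSS", follows Java/Spark-style date patterns. Assuming that convention, it corresponds to the Python strptime format below; verifying the parse against a sample timestamp from your own data is a quick sanity check before ingesting.

```python
from datetime import datetime

# "yyyy-MM-dd HH:mm:ss.SSS" (Java/Spark pattern) maps to this strptime format;
# %f parses the fractional seconds (Python accepts up to microseconds).
FMT = "%Y-%m-%d %H:%M:%S.%f"

ts = datetime.strptime("2024-01-01 10:00:00.250", FMT)
print(ts.isoformat())
```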
CONNECT TO YOUR DATA SOURCE
LitenAI can read or stream data from existing data sources. Not all data needs to be ingested into the lake. Please contact us to learn how we can connect to your data sources.