LitenAI

Are you tired of reasoning through mountains of data with slow queries?


LitenAI is a Big Data Copilot that observes, reasons, and acts on large amounts of data through a natural chat interface. It combines expert AI agents and accelerated data agents with workflows to reason about and visualize large amounts of data. It uses Generative AI to carry out everyday engineering tasks through that chat interface, and it accelerates big data queries for an interactive data chat experience.

 

MULTI-AGENT PLATFORM WITH AI AND DATA AGENTS


Architecture

LitenAI is a Silicon Valley startup championing a cutting-edge multi-agent platform designed to observe, reason, and act upon data. Its founders possess extensive expertise in AI and data applications and have a proven track record of accelerating platform development for superior performance. They bring valuable experience in optimizing Apache Spark™ performance for large-scale distributed analytics.


CORE FEATURES

  • The LitenAI platform enables 100x efficiency by automating engineering work with AI and providing a conversational interface to data on the public cloud or on-premises.

  • It can be used as a standalone installation, within your Jupyter notebooks, or from a Slack chat using Liten bots.

  • Its GenAI platform for the cloud engineering domain is pre-trained and uses best-in-class data and AI tools. As customers use the tool, it learns and becomes smarter.

USER INTERFACES

  • The LitenAI standalone chat acts as a smart data copilot. Users can observe data by asking query questions and reason about it in natural language.

  • LitenAI can be installed with pip and used inside a Jupyter notebook. A single send call carries out all of your reasoning; the LitenAI copilot's master agent routes each request to the appropriate data or AI agent (see the sketch after this list).

  • Liten is also available as a Slack chatbot. Add the Liten bot to your Slack workspace and chat with it to get answers from your data.
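As a concrete example, a Jupyter session might look like the sketch below. The package name, the client class, and the question are illustrative assumptions; only the single send-call pattern comes from the description above.

```python
# Hypothetical sketch of the Jupyter workflow described above. The package
# name `litenai` and the LitenAI() client are assumptions; the single send()
# call pattern follows the product description.

# pip install litenai   # assumed package name

from litenai import LitenAI  # assumed import

copilot = LitenAI()  # connect to the LitenAI service (connection details omitted)

# One send call carries the whole request; the master agent routes it to the
# appropriate data or AI agent and returns the answer.
answer = copilot.send("How many 500 errors did the web service log in the past hour?")
print(answer)
```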

 

USE CASES

BIG DATA REASONING

Modern enterprises are at a crossroads with the advent of Generative AI. LitenAI's mission is to help enterprises tap into the vast data and knowledge within these companies. For that mission, it has developed a big data copilot with integrated GenAI agents and big data clusters to enable much faster response times.


TURBOCHARGE PRODUCTIVITY

At one of our client sites, training new engineers necessitated the presence of experienced ones. LitenAI optimized this process by leveraging existing playbooks and cloud server datasets to refine the models. Additionally, it incorporated customer-specific prompt engineering to elicit expert, human-like responses from the Liten platform.

A newly onboarded engineer could ask how to debug status-code errors, ask about various log files, or request the most significant issues from the past month, and receive expert, human-like responses.


NATURAL LANGUAGE BASED REASONING

Customers often feel hesitant about learning a new query language. The Liten platform addresses this concern by offering a natural language interface for query specification. Its code AI agent translates these specifications into SQL queries, enabling customers to use them seamlessly. Moreover, the platform integrates sophisticated visualization tools, empowering users to create and visualize cloud-based dashboards effortlessly. All these functionalities are orchestrated through a master agent, accessible via a unified natural language interface.
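To make the translation step concrete, here is a purely illustrative example: a natural-language request and the kind of SQL a code agent could generate for it. The request, the query, and the table and column names (access_logs, event_time) are all assumptions.

```python
# Illustrative only: the request and the generated SQL are examples, and the
# table/column names (access_logs, event_time) are assumptions.

request = "Show me how many requests we served per day over the last week."

generated_sql = """
SELECT CAST(event_time AS DATE) AS day,
       COUNT(*)                 AS request_count
FROM access_logs
WHERE event_time >= date_sub(current_date(), 7)
GROUP BY CAST(event_time AS DATE)
ORDER BY day
"""
# The visualization agent could then chart request_count by day as a dashboard panel.
```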

Liten's models serve as invaluable aids to all cloud engineers—CIOs, SREs, DevOps engineers, and more—acting as co-pilots to significantly enhance productivity.


STREAMLINED EVENT PUBLISHING

At one of our customers, many alerts were being issued for the same event. Liten can streamline this flow: it stores the alerts and, using its models, detects similar events and produces consolidated messages through Slack bots. The flow looks like this (a minimal sketch follows the list):

  • Multiple alerts are generated for one event.
  • Liten consolidates the alerts into an event group.
  • The consolidated message is published to Slack channels and conversations.
  • LitenBot in Slack answers questions about all the events, using fine-tuned agents for targeted answers.
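A minimal sketch of the consolidation step is shown below. The alert fields, the grouping key, and the sample data are assumptions for illustration; only the group-and-consolidate flow comes from the list above, and delivery to Slack is left as a placeholder.

```python
from collections import defaultdict

# Minimal sketch of the consolidation flow above. The alert shape (dicts with
# "event_id", "message", "timestamp") and the grouping key are assumptions.

def consolidate_alerts(alerts):
    """Group raw alerts by the event they describe and build one summary each."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["event_id"]].append(alert)

    summaries = []
    for event_id, event_alerts in groups.items():
        first = min(a["timestamp"] for a in event_alerts)
        last = max(a["timestamp"] for a in event_alerts)
        summaries.append(
            f"Event {event_id}: {len(event_alerts)} alerts between {first} and {last}. "
            f"Sample message: {event_alerts[0]['message']}"
        )
    return summaries

# Each summary would then be posted once to the relevant Slack channel, where
# LitenBot can answer follow-up questions about the event group.
for summary in consolidate_alerts([
    {"event_id": "disk-full-42", "message": "Disk usage at 98%", "timestamp": "2024-01-01T10:00"},
    {"event_id": "disk-full-42", "message": "Disk usage at 99%", "timestamp": "2024-01-01T10:05"},
]):
    print(summary)
```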

SAMPLE USER REASONING SESSIONS

LitenAI possesses the capability to comprehend diverse log file formats. Its adaptable data model allows for seamless extensions, facilitating the comprehension of customer-specific data. The fine-tuned models within LitenAI are adept at understanding domain-specific knowledge. Moreover, LitenAI can fine-tune models using customer data exclusively for their specific use cases.

SERVER LOG REASONING

Server Error Plot Chat

LitenAI models are fine-tuned to perform server log reasoning with expert-level skill. Here are several examples showcasing the range of analysis they can perform.

Internal server error analysis

Users can upload their data for analysis through LitenAI's platform. Additionally, LitenAI manages SQL tables and ingests data into these tables. This enables customers to execute queries, create visualizations, and deduce insights from the data. Moreover, LitenAI conducts advanced analysis and coordinates both data and AI agents to fulfill various tasks for the users.


LitenAI can analyze data and offer error analysis as part of its functionality.

Customers performed various tasks through the chat interface, such as:

  • Ask for the count of 200, 300, 400, and 500 status codes observed by the service in the past hour, or for a specific time range between time x and time y, using aggregated time-based queries (see the SQL sketch after this list).
  • Determine which day of the week typically experiences the highest traffic.
  • Identify the time of day when traffic is at its peak.
  • Request information about the increase or decrease in traffic volume over the past 12 months.
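As an illustration, the first request in the list above might translate into an aggregated, time-based query like the sketch below. The table name `server_logs` and its columns are assumptions, and the bound parameters stand in for "time x" and "time y".

```python
# Illustrative only: an aggregated time-based query for status-code counts.
# The table name `server_logs` and its columns are assumptions; :time_x and
# :time_y are placeholders for the requested time range.

status_counts_sql = """
SELECT date_trunc('hour', event_time) AS hour,
       status_code,
       COUNT(*)                       AS request_count
FROM server_logs
WHERE status_code IN (200, 300, 400, 500)
  AND event_time BETWEEN :time_x AND :time_y
GROUP BY date_trunc('hour', event_time), status_code
ORDER BY hour, status_code
"""
```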


LINUX LOG EXPERT ANALYSIS

Syslog Error Chat

The Linux operating system generates diverse log files including system logs, application logs, and event logs. LitenAI comprehends these log files and has the capability to analyze failures, offering potential solutions accompanied by code.

Analysis of syslog error

Interpreting Linux system logs can be challenging. Liten stores these logs and conducts comprehensive analyses to derive valuable insights. Refer to the following chat for an example of the analysis.

Users can inquire about different aspects of Linux log data, such as:

  • Crafting a script to parse syslog data and flag RAM as corrupted if more than 10 errors occur within a 24-hour period (a minimal sketch follows this list).
  • Determining the frequency of RAM-related CRC errors across all syslogs.
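A minimal sketch of the first request is shown below. The log path, the year, and the regular expression for a RAM CRC error line are assumptions; the 10-errors-in-24-hours rule comes from the request itself.

```python
import re
from collections import deque
from datetime import datetime, timedelta

# Minimal sketch of the scripted check described above: flag RAM as corrupted
# if more than 10 RAM CRC errors appear within any 24-hour window.
# The log path and the regular expression for an error line are assumptions.

RAM_ERROR = re.compile(r"RAM.*CRC error", re.IGNORECASE)
WINDOW = timedelta(hours=24)
THRESHOLD = 10

def ram_corrupted(syslog_path="/var/log/syslog", year=2024):
    """Return True if more than THRESHOLD RAM CRC errors fall within 24 hours."""
    recent = deque()  # timestamps of matching errors inside the sliding window
    with open(syslog_path) as syslog:
        for line in syslog:
            if not RAM_ERROR.search(line):
                continue
            # Classic syslog lines start with e.g. "Jan 12 03:04:05"; the year is assumed.
            stamp = datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")
            recent.append(stamp)
            while recent and stamp - recent[0] > WINDOW:
                recent.popleft()
            if len(recent) > THRESHOLD:
                return True
    return False

if __name__ == "__main__":
    print("RAM corrupted:", ram_corrupted())
```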

LitenAI streamlines the analysis of extensive datasets through innovative AI and accelerated data agents, simplifying complex reasoning tasks. It unlocks pioneering solutions by harnessing next-generation tools.

ACCELERATED BIG DATA OBSERVATION

Modern enterprises are experiencing an unprecedented surge in data creation, with vast amounts of information generated constantly. This expansion coincides with significant advances in cloud technology, marked by disaggregated systems. Despite significant and continuous hardware advancements, organizations struggle to meet ever-increasing demands for performance improvements and shorter cycle times while containing computational and cloud costs. Organizations are also racing to combat climate change and meet their sustainability targets.

Employing a unique tensor representation, the data agents within LitenAI can accelerate queries by a factor of 50-100x, helping organizations achieve the twin goals of performance and sustainable computing while also saving costs in a competitive environment.


ACCELERATING DATA AGENT BY 100x

Current relational and tabular data platforms have not adapted to leverage emerging technologies. Employing a unique tensor representation, the data agents within LitenAI accelerate queries by a factor of 100. They transform incoming data into a tensor-formatted columnar structure, optimized for processors and accelerators, to speed up existing queries. LitenAI seamlessly integrates into Spark clusters and can efficiently ingest data from various data warehouses.

LitenAI seamlessly operates within existing or new Spark™ clusters as a service. Once activated, it utilizes jar files in Spark tasks to perform accelerated functions through LitenAI. Jobs executed by customers maintain their settings but experience enhanced performance. LitenAI accelerates various tasks such as filters and joins, typically found in commonly used SQL query plans. Additionally, LitenAI can construct customized accelerated UDFs (User Defined Functions) tailored to individual customer requirements. These UDFs can be applied within queries or used as standalone functions, offering valuable solutions for specific customer use cases.
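As a rough sketch of what attaching such a service and a custom UDF could look like, the snippet below uses standard PySpark calls; the jar path, the `traffic` table, and the policy-check logic are hypothetical placeholders invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import BooleanType

# Sketch only: attach an acceleration jar to a Spark session and register a
# customer-specific UDF. The jar path, the table name `traffic`, and the
# policy rule are assumptions; the Spark APIs themselves are standard PySpark.

spark = (
    SparkSession.builder
    .appName("liten-accelerated-job")
    .config("spark.jars", "/opt/liten/liten-accel.jar")  # assumed jar location
    .getOrCreate()
)

# A customer-specific UDF; in practice this could be backed by an accelerated
# implementation supplied in the jar above.
def violates_policy(bytes_sent):
    return bytes_sent is not None and bytes_sent > 10_000_000

spark.udf.register("violates_policy", violates_policy, BooleanType())

# Existing queries keep their form and simply pick up the registered function.
spark.sql("""
    SELECT src_ip, COUNT(*) AS violations
    FROM traffic
    WHERE violates_policy(bytes_sent)
    GROUP BY src_ip
""").show()
```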

One of our clients hosts extensive network traffic data in the cloud, reaching volumes that can span petabytes and become cost-prohibitive. LitenAI offers a big data solution capable of storing virtually limitless log data. It employs the industry-standard open data lake format, Delta Lake, for storage. Specifically, in this scenario, data is stored in partitions categorized by traffic timestamps.
To address company needs and inquiries, the client regularly queries these files to detect policy violations. However, this process had become notably slow, hindering their ability to meet service level requirements. To tackle this, LitenAI adds an acceleration layer by storing data in a tensor-formatted file. This reduces the need for cross-referencing and joining with additional data dimensions. As a result, Liten expedited the customer's queries, delivering the sought-after policy answers promptly.
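A sketch of the partitioned Delta Lake layout described above is shown below. It assumes the delta-spark package is available, a hypothetical source table `raw_traffic` with an `event_time` column, and an assumed storage path.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

# Sketch of the storage layout described above: log data written as Delta Lake
# files partitioned by traffic date. The source table name, column name, and
# output path are assumptions; delta-spark must be configured on the session.

spark = SparkSession.builder.appName("traffic-ingest").getOrCreate()
traffic = spark.table("raw_traffic")  # assumed source table

(
    traffic
    .withColumn("traffic_date", to_date("event_time"))
    .write
    .format("delta")
    .partitionBy("traffic_date")
    .mode("append")
    .save("/datalake/network_traffic")  # assumed storage path
)
```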

BENCHMARK TPCH QUERY

The Transaction Processing Performance Council (TPC) sets industry-standard benchmarks for data and query performance. TPC Benchmark™ H (TPCH) is a decision support benchmark. Benchmark tests were conducted for Query 5 and Query 6 of TPCH because they exercise complex joins and scan-heavy aggregates that result in longer query plans. These tests used both a standalone Spark cluster and a separate Liten service. Liten enhanced query performance through a tensor-based engine and preserved an in-memory cache of tensors to eliminate redundant creation work.

TPCH Query 5

This query compiles the revenue generated from local suppliers.
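For reference, the standard TPC-H Q5 (Local Supplier Volume) has the join-heavy shape below, shown with example substitution parameters; it joins six tables before aggregating revenue by nation.

```python
# Standard TPC-H Query 5 (Local Supplier Volume), with example substitution
# parameters (region 'ASIA', order year 1994), expressed as a Spark SQL string.

tpch_q5 = """
SELECT n_name,
       SUM(l_extendedprice * (1 - l_discount)) AS revenue
FROM customer, orders, lineitem, supplier, nation, region
WHERE c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND l_suppkey = s_suppkey
  AND c_nationkey = s_nationkey
  AND s_nationkey = n_nationkey
  AND n_regionkey = r_regionkey
  AND r_name = 'ASIA'
  AND o_orderdate >= DATE '1994-01-01'
  AND o_orderdate < DATE '1995-01-01'
GROUP BY n_name
ORDER BY revenue DESC
"""
```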

LitenAI expedites this process by eliminating the necessity for joins. Instead, tensor data replaces joins with more straightforward multi-dimensional lookups. This streamlines operations by minimizing shuffle operations, resulting in a significant acceleration of the query.
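As a rough conceptual illustration (not LitenAI's actual implementation), replacing a key-equality join with an indexed lookup can look like this: the dimension table is laid out once as an array indexed by key, and the probe side becomes a vectorized gather instead of a shuffle-and-join.

```python
import numpy as np

# Conceptual sketch only, not LitenAI's implementation: replace a join on
# nation_key with a direct lookup into a dense array indexed by that key.
# All data values here are made up for illustration.

# Dimension side: nation names laid out so that index == nation_key.
nation_names = np.array(["FRANCE", "GERMANY", "INDIA", "JAPAN"])

# Fact side: one nation_key per line item, plus its revenue contribution.
lineitem_nation_key = np.array([2, 2, 0, 3, 1, 2])
lineitem_revenue = np.array([10.0, 20.0, 5.0, 7.5, 12.0, 3.0])

# "Join" by gathering names through the key, then aggregate revenue per nation.
per_row_nation = nation_names[lineitem_nation_key]
for name in np.unique(per_row_nation):
    print(name, lineitem_revenue[per_row_nation == name].sum())
```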

TPCH Query 6

This query assesses the potential revenue growth achievable by removing company-wide discounts.
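For reference, the standard TPC-H Q6 (Forecasting Revenue Change) is a single-table scan and aggregate, shown below with example substitution parameters.

```python
# Standard TPC-H Query 6 (Forecasting Revenue Change), with example substitution
# parameters (ship year 1994, discount 0.06 +/- 0.01, quantity < 24).

tpch_q6 = """
SELECT SUM(l_extendedprice * l_discount) AS revenue
FROM lineitem
WHERE l_shipdate >= DATE '1994-01-01'
  AND l_shipdate < DATE '1995-01-01'
  AND l_discount BETWEEN 0.05 AND 0.07
  AND l_quantity < 24
"""
```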

With LitenAI, the query plan is simplified along with accelerated scan/aggregates. On a standard Azure DS2 VM, Spark 3.2 required 16 seconds to execute, whereas Liten completed the task in a mere 0.06 seconds, delivering over a 100x performance improvement.

LitenAI solutions enhance Spark performance for large-scale distributed analytics. They can be utilized either as a standalone service or added to an existing customer cluster as an additional service.
 

ABOUT

Liten

LitenAI, a Silicon Valley startup, champions a cutting-edge multi-agent platform designed to observe, reason, and act upon data. Its founders possess extensive expertise in AI and data applications and have a proven track record of accelerating platform development for superior performance. They bring valuable experience in optimizing Apache Spark™ performance for large-scale distributed analytics.

Contact Us

Fill out the form below to send us a message.
