The Power of Combining Traditional ML with Large Language Models

published on April 25, 2025

Ever wondered how to get the best of both worlds when it comes to AI? Traditional machine learning excels at crunching numbers and finding patterns in data, while Large Language Models (LLMs) are incredible at understanding and generating human language. Combining them creates a powerful synergy that can transform your data science projects.

In this guide, I'll walk you through practical approaches to integrating specialized ML systems with open-source LLMs served through Ollama, focusing specifically on time series analysis. You'll learn how to use traditional ML models for precise predictions while relying on LLMs to interpret the results and create interactive user experiences.

Why This Integration Makes Sense

Traditional ML frameworks like TensorFlow and PyTorch are exceptional at processing structured data and making numerical predictions, and specialized architectures such as LSTMs are well suited to capturing temporal patterns in time series data.

Meanwhile, LLMs shine at:

  • Understanding natural language requests
  • Explaining complex results in human terms
  • Providing conversational interfaces to technical systems

By combining these technologies, we create systems that benefit from both computational precision and natural language capabilities.

Getting Started with Ollama

Ollama makes integrating open-source LLMs into Python applications straightforward. Here's how to get started:

  1. Install and run Ollama
  2. Pull a model: ollama pull llama3.3
  3. Install the Python library: pip install ollama

Basic usage is simple:

import ollama

response = ollama.generate(model='llama3.3', prompt='Why is the sky blue?')
print(response['response'])

For more interactive applications, try the chat functionality:

from ollama import chat

response = chat(
    model='llama3.3',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}]
)
print(response['message']['content'])
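
Both generate and chat also accept a stream=True flag, which returns an iterator of partial responses so tokens appear as they're generated. Here's a minimal sketch:

from ollama import chat

# Stream the reply token by token instead of waiting for the full response
stream = chat(
    model='llama3.3',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)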

Four Powerful Integration Strategies

After working extensively with these technologies, I've identified four effective strategies for combining ML systems with LLMs:

Strategy 1: ML Predictions as Input to LLMs

Train your ML model, generate predictions, and let the LLM interpret them. This approach is perfect when you need human-readable analysis of complex forecasts (Example 1 below implements this pattern).

Strategy 2: Function Calling for On-Demand ML Predictions

Use Ollama's function calling capability to trigger ML predictions during conversations. This creates a seamless experience where users can request forecasts or analyses through natural language (Example 2 below shows this in action).

Strategy 3: LLMs for Contextual Enhancement of ML Outputs

Have your ML model do the heavy lifting with predictions, then use LLMs to add context and explanations that make the outputs more valuable and accessible.
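
As a minimal sketch of this strategy, you might wrap the raw numbers in a prompt that asks the LLM to add domain context (the forecast values and the logistics framing here are invented for illustration):

import ollama

# Hypothetical forecast produced upstream by a traditional ML model
forecast = [102.3, 104.1, 103.8, 106.0, 107.2, 105.9, 108.4]

# Ask the LLM to enrich the raw numbers with context a stakeholder can act on
prompt = f"""Our demand model forecasts the following daily values: {forecast}.
Explain the trend in plain language and note anything a logistics manager should watch for."""

response = ollama.generate(model='llama3.3', prompt=prompt)
print(response['response'])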

Strategy 4: Hierarchical Decision Systems

Use ML models for precise predictions and LLMs for high-level decision-making based on those predictions. This mimics how humans combine detailed analysis with strategic thinking.
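
Here's a hedged sketch of that division of labor (the capacity numbers and action names are illustrative assumptions): the ML model supplies the forecast, and the LLM is constrained to pick one high-level action:

import ollama

# Hypothetical outputs from the ML layer
forecast_peak = 9200      # predicted peak load, requests/hour
current_capacity = 8000   # current capacity, requests/hour

# The LLM makes the high-level call; the numeric work stays in the ML model
prompt = f"""A capacity model predicts a peak load of {forecast_peak} requests/hour
against a current capacity of {current_capacity} requests/hour.
Choose exactly one action: SCALE_UP, HOLD, or SCALE_DOWN, and justify it in one sentence."""

decision = ollama.generate(model='llama3.3', prompt=prompt)
print(decision['response'])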

Practical Code Examples

Let's dive into practical implementations. I've created examples that showcase different integration approaches.

Example 1: Time Series Forecasting with TensorFlow and Ollama Interpretation

This example uses an LSTM model for forecasting and Ollama to interpret the results:

import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import ollama

# Generate synthetic time series data
def generate_time_series(n_samples=1000):
    time = np.arange(n_samples)
    seasonal = 10 * np.sin(0.1 * time)
    trend = 0.01 * time
    noise = 2 * np.random.randn(n_samples)
    signal = seasonal + trend + noise
    return time, signal

# Create and prepare data
time, signal = generate_time_series()
df = pd.DataFrame({'time': time, 'value': signal})

# Prepare data for TensorFlow model
def create_dataset(data, time_steps=1):
    X, y = [], []
    for i in range(len(data) - time_steps):
        X.append(data[i:(i + time_steps)])
        y.append(data[i + time_steps])
    return np.array(X), np.array(y)

# Normalize data to [0, 1] so the LSTM trains stably
scaler = MinMaxScaler()
signal_scaled = scaler.fit_transform(df['value'].values.reshape(-1, 1))

# Create sequences
time_steps = 10
X, y = create_dataset(signal_scaled, time_steps)
X = X.reshape(X.shape[0], X.shape[1], 1)

# Split data
train_size = int(len(X) * 0.8)
X_train, X_test = X[:train_size], X[train_size:]
y_train, y_test = y[:train_size], y[train_size:]

# Build LSTM model
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, activation='relu', input_shape=(X_train.shape[1], 1)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')

# Train model
history = model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1, verbose=1)

# Make predictions
predictions = model.predict(X_test)

# Inverse transform predictions and actual values
predictions = scaler.inverse_transform(predictions)
y_test_inv = scaler.inverse_transform(y_test)

# Calculate errors
mse = np.mean((predictions - y_test_inv) ** 2)
rmse = np.sqrt(mse)

# Prepare a summary of the forecasting results for Ollama
forecast_summary = f"""
The time series forecasting model produced the following results:
- Mean Squared Error (MSE): {mse:.4f}
- Root Mean Squared Error (RMSE): {rmse:.4f}
- The model was trained on {len(X_train)} samples and tested on {len(X_test)} samples.
- The model shows {'good' if rmse < 3 else 'moderate' if rmse < 5 else 'poor'} performance.

Can you analyze these results and provide insights on the model's performance?
"""

# Use Ollama to interpret the results
response = ollama.generate(
    model='llama3.3',
    prompt=forecast_summary
)

print("Ollama's Interpretation of Results:")
print(response['response'])

Example 2: Anomaly Detection with Function Calling

This example demonstrates how Ollama can call an anomaly detection function during conversation:

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, LSTM, RepeatVector, TimeDistributed
from tensorflow.keras.models import Model
import ollama
import json

# Function to detect anomalies in a given time range
def detect_anomalies(start_date, end_date, threshold=0.1):
    """
    Detect anomalies in a specified time range using an LSTM Autoencoder.

    Args:
        start_date (str): Start date in YYYY-MM-DD format
        end_date (str): End date in YYYY-MM-DD format
        threshold (float): Threshold for anomaly detection

    Returns:
        dict: Dictionary containing anomaly information
    """
    # Here would be the implementation of anomaly detection
    # For brevity, we'll simulate a result

    result = {
        "start_date": start_date,
        "end_date": end_date,
        "total_points": 240,
        "anomaly_count": 3,
        "threshold": threshold,
        "anomalies": [
            {
                "timestamp": "2023-01-02 14:00:00",
                "value": 25.7,
                "deviation": 4.2
            },
            {
                "timestamp": "2023-01-03 08:30:00",
                "value": -5.3,
                "deviation": 6.8
            },
            {
                "timestamp": "2023-01-04 22:15:00",
                "value": 30.1,
                "deviation": 5.5
            }
        ]
    }

    return result

# Setup function calling with Ollama
def process_request_with_ollama():
    # Initialize chat history
    messages = [
        {
            "role": "system",
            "content": "You are an AI assistant that helps with time series anomaly detection."
        }
    ]

    print("Time Series Anomaly Detection System")
    print("Type 'exit' to quit the conversation.")

    while True:
        # Get user input
        user_input = input("\nYou: ")

        if user_input.lower() == 'exit':
            break

        # Add user message to chat history
        messages.append({"role": "user", "content": user_input})

        # Define the tool for anomaly detection
        tools = [
            {
                "type": "function",
                "function": {
                    "name": "detect_anomalies",
                    "description": "Detect anomalies in time series data within a specified date range",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "start_date": {
                                "type": "string",
                                "description": "Start date in YYYY-MM-DD format"
                            },
                            "end_date": {
                                "type": "string",
                                "description": "End date in YYYY-MM-DD format"
                            },
                            "threshold": {
                                "type": "number",
                                "description": "Threshold for anomaly detection (optional)"
                            }
                        },
                        "required": ["start_date", "end_date"]
                    }
                }
            }
        ]

        # Get response from Ollama
        response = ollama.chat(
            model="llama3.3",
            messages=messages,
            tools=tools
        )

        # Check if the model wants to call a function
        tool_calls = response['message'].get('tool_calls')
        if tool_calls:
            # Record the assistant's tool-call message before adding tool results
            messages.append(response['message'])
            for tool_call in tool_calls:
                if tool_call['function']['name'] == 'detect_anomalies':
                    # In the Ollama Python library, tool-call arguments arrive
                    # as an already-parsed dict, not a JSON string
                    args = tool_call['function']['arguments']

                    # Call the function
                    threshold = args.get('threshold', 0.1)
                    result = detect_anomalies(args['start_date'], args['end_date'], threshold)

                    # Add the function result to the conversation history
                    # (Ollama tool messages carry a name, not a tool_call_id)
                    messages.append({
                        "role": "tool",
                        "name": "detect_anomalies",
                        "content": json.dumps(result)
                    })

            # Get final response after function call
            final_response = ollama.chat(
                model="llama3.3",
                messages=messages
            )

            # Print the response and add to messages
            print(f"\nAI: {final_response['message']['content']}")
            messages.append({"role": "assistant", "content": final_response['message']['content']})
        else:
            # Print the response and add to messages
            print(f"\nAI: {response['message']['content']}")
            messages.append({"role": "assistant", "content": response['message']['content']})

# Uncomment to run the interactive system
# process_request_with_ollama()

Creating an End-to-End Time Series Analysis Pipeline

Want to see how all these pieces fit together? I've built a comprehensive example combining everything we've learned. This system includes:

  • LSTM-based time series forecasting
  • Interactive visualization
  • LLM-powered analysis
  • Function calling for on-demand predictions
  • Anomaly detection capabilities

You can check out the full code in the GitHub repository for this article.
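
As a rough sketch of how such a pipeline can be organized (the class and method names here are my own illustrative choices, not the repository's API):

import ollama

class TimeSeriesAssistant:
    """Thin orchestration layer: the ML models compute, the LLM explains."""

    def __init__(self, forecaster, detector, llm_model='llama3.3'):
        self.forecaster = forecaster    # e.g., a trained Keras LSTM
        self.detector = detector        # e.g., an LSTM autoencoder
        self.llm_model = llm_model

    def forecast_and_explain(self, window):
        # Numeric work stays in the ML model...
        prediction = self.forecaster.predict(window)
        # ...and the LLM turns the numbers into a narrative
        prompt = (f"The model forecasts {prediction.flatten().tolist()}. "
                  "Summarize the outlook in two sentences.")
        return ollama.generate(model=self.llm_model, prompt=prompt)['response']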

Best Practices for Your Integration Projects

Through experimenting with these integrations, I've discovered several best practices:

1. Clear Separation of Concerns

Let each system do what it does best. Use ML models for statistical analysis and numerical predictions, and LLMs for natural language understanding and user interaction.

2. Effective Data Flow Design

Design clear data flows between systems, ensuring data is properly formatted when moving between ML models and LLMs.
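
For example, rather than dumping a raw array into a prompt, you can condense model output into a compact, structured summary at the boundary (a small helper I'm sketching here, not part of any library):

import json
import numpy as np

def summarize_for_llm(predictions: np.ndarray) -> str:
    """Condense raw model output into a compact, LLM-friendly JSON summary."""
    return json.dumps({
        "n_points": int(predictions.size),
        "mean": round(float(predictions.mean()), 2),
        "min": round(float(predictions.min()), 2),
        "max": round(float(predictions.max()), 2),
    })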

3. Thoughtful Prompt Engineering

Create clear, informative prompts when passing ML outputs to LLMs. The quality of your prompts directly impacts the quality of the LLM's response.
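
For instance, a structured template that states the audience, the data, and the task explicitly tends to produce more focused answers (the field values below are made up for illustration):

# A structured prompt template: audience, data, and an explicit task
PROMPT_TEMPLATE = """You are analyzing a time series forecast for {audience}.

Model results:
{results}

Task: explain the key trend in plain language and flag any risks.
Limit your answer to three sentences."""

prompt = PROMPT_TEMPLATE.format(
    audience="a non-technical operations team",
    results="RMSE: 2.41; next four daily values: 102, 104, 103, 106",
)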

4. Function Calling for Complex Operations

Use function calling to enable LLMs to invoke ML capabilities on demand, particularly for complex operations like forecasting or anomaly detection (see Example 2 above).

5. User-Centered Design

Focus on creating a seamless experience. Your users shouldn't need to understand the complex integration happening behind the scenes.

Conclusion

By combining the computational precision of traditional ML with the natural language capabilities of LLMs, we can create systems that are both technically powerful and accessible. The examples provided in this guide demonstrate various integration strategies that you can adapt for your own projects.

As these technologies continue to evolve, we can expect to see increasingly seamless integration between ML systems and LLMs. This convergence will lead to more powerful and user-friendly AI applications across many domains.

I'd love to hear about your experiences integrating ML with LLMs. What strategies have worked for you? What challenges have you faced? Share your thoughts in the comments below.

Happy coding!

---

At CorticalFlow, our mission is to expand the cognitive abilities of our users.

Disclaimer

The code provided here is not a production-ready setup in terms of security and stability. Use any code from this tutorial at your own risk, and always conduct a security audit before putting code into production.

Nothing in this tutorial or its code should be considered financial advice. Always consult a professional investment advisor before making any investment.