ai-ml · April 27, 2026 · Tags: python, tensorflow, pytorch, scikit-learn, machine-learning, deep-learning, enterprise-ai, azure, mlops

Python Runs AI — TensorFlow, PyTorch, and Scikit-learn Decoded for Enterprise Architects

Python is not just a language choice for AI and ML: it is the substrate every major framework, cloud SDK, and pipeline tool is built on. The framework you pick determines your team's velocity, debugging surface, and production operational model, and picking the wrong framework for the wrong problem costs teams months of rework.

The three frameworks that dominate enterprise AI work — Scikit-learn, TensorFlow, and PyTorch — are not competitors on the same dimension. They solve fundamentally different problems at different layers of the AI stack. Treating them as interchangeable is how teams end up with a 50-layer neural network solving a problem a logistic regression would have handled in 3 milliseconds.


Why Python Owns AI

Before the frameworks: why Python?

Python dominates AI because:

  • Ecosystem depth — every major framework ships Python bindings first
  • Interactive development — Jupyter notebooks make experimentation fast; mistakes are cheap
  • NumPy array protocol — a shared memory format that lets frameworks interoperate without copies
  • Azure, GCP, AWS SDKs — all Python-first for ML workloads
  • Hiring pool — data scientists, ML engineers, and backend engineers all speak Python

The cost: Python is slow for compute-intensive loops. Every serious framework solves this the same way — Python handles the graph definition and orchestration; C++/CUDA kernels run the actual math.
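The split described above can be seen in miniature with NumPy: the same dot product, computed once as a pure-Python loop and once dispatched to a compiled kernel. A minimal sketch; absolute timings will vary by machine.

```python
import time
import numpy as np

def dot_python(a, b):
    # Pure-Python loop: every iteration pays interpreter overhead
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
slow = dot_python(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = float(a @ b)  # dispatches to a C/BLAS kernel
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop * 1e3:.1f} ms  vectorized: {t_vec * 1e3:.2f} ms")
```

Same math, different engine: this is the pattern every framework below repeats, with CUDA kernels in place of BLAS.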


The Decision Framework: Which Tool for Which Job

Before writing a line of code, the right framework question is: what kind of problem is this?

| Problem Type | Right Tool | Wrong Tool |
| --- | --- | --- |
| Loan default prediction (tabular) | Scikit-learn XGBoost | PyTorch neural net |
| Image classification at scale | TensorFlow + TF Serving | Scikit-learn |
| LLM fine-tuning | PyTorch + Hugging Face PEFT | TensorFlow |
| Fraud detection (real-time, <5ms) | Scikit-learn + ONNX export | TensorFlow Serving |
| Sentiment analysis (pre-trained) | Hugging Face Transformers (PyTorch) | Training from scratch |
| Time-series anomaly detection | Scikit-learn IsolationForest | Deep learning |
| Custom diffusion model research | PyTorch | TensorFlow |
| Mobile / edge inference | TensorFlow Lite | PyTorch (raw) |

Scikit-learn — The Workhorse of Enterprise ML

When to use it: Tabular data, classical ML algorithms, explainability requirements, fast iteration, regulatory environments (SR 11-7, HIPAA) where model interpretability is mandatory.

Scikit-learn is not glamorous. It is the tool that solves 60% of real enterprise ML problems — and it solves them with a consistent API, deterministic behavior, and models that run in microseconds.
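What "consistent API" means in practice: every estimator honors the same fit / predict contract, so benchmarking model families is a dictionary loop, not three codebases. A minimal sketch on synthetic data; the dataset and model choices are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# The uniform estimator contract: fit / predict / predict_proba.
# Swapping model families changes one line, not the harness.
candidates = {
    'logreg': LogisticRegression(max_iter=1000),
    'forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'gbm': GradientBoostingClassifier(random_state=42),
}

scores = {
    name: cross_val_score(model, X, y, cv=5, scoring='roc_auc').mean()
    for name, model in candidates.items()
}
for name, auc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC {auc:.3f}")
```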

MortgageIQ: Loan Approval Risk Scoring

At MortgageIQ, the first version of the loan risk model was Scikit-learn XGBoost. It went to production in 3 weeks, scored 5,000+ loans per day at sub-millisecond latency, and passed the SR 11-7 model validation because we could produce SHAP explanations for every decision.

# Azure stack: train on Azure ML, export to Azure Container Apps
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score, classification_report
import shap
import mlflow
import mlflow.sklearn

# Feature definitions — explicit for audit trail
NUMERIC_FEATURES = ['credit_score', 'dti_ratio', 'ltv_ratio', 'months_employed', 'loan_amount']
CATEGORICAL_FEATURES = ['loan_type', 'property_type', 'employment_status']
TARGET = 'default_flag'

def build_loan_risk_pipeline() -> Pipeline:
    numeric_transformer = Pipeline([
        ('scaler', StandardScaler()),
    ])
    categorical_transformer = Pipeline([
        ('encoder', OneHotEncoder(handle_unknown='ignore', sparse_output=False)),
    ])
    preprocessor = ColumnTransformer([
        ('num', numeric_transformer, NUMERIC_FEATURES),
        ('cat', categorical_transformer, CATEGORICAL_FEATURES),
    ])
    return Pipeline([
        ('preprocessor', preprocessor),
        ('classifier', GradientBoostingClassifier(
            n_estimators=200,
            learning_rate=0.05,
            max_depth=4,
            subsample=0.8,
            random_state=42,
        )),
    ])

def train_and_log(df: pd.DataFrame) -> None:
    X = df[NUMERIC_FEATURES + CATEGORICAL_FEATURES]
    y = df[TARGET]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    with mlflow.start_run():
        pipeline = build_loan_risk_pipeline()
        pipeline.fit(X_train, y_train)

        # Cross-validation AUC — required for SR 11-7 validation report
        cv_auc = cross_val_score(pipeline, X_train, y_train, cv=5, scoring='roc_auc').mean()
        test_auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])

        mlflow.log_metric('cv_auc', cv_auc)
        mlflow.log_metric('test_auc', test_auc)
        mlflow.sklearn.log_model(pipeline, 'loan_risk_model')

        # SHAP explainability — required for regulatory review
        explainer = shap.TreeExplainer(pipeline.named_steps['classifier'])
        X_test_transformed = pipeline.named_steps['preprocessor'].transform(X_test)
        shap_values = explainer.shap_values(X_test_transformed)
        mean_abs = np.abs(shap_values).mean(axis=0)[:len(NUMERIC_FEATURES)]
        mlflow.log_dict(
            {'mean_abs_shap': dict(zip(NUMERIC_FEATURES, mean_abs.tolist()))},
            'shap_summary.json',
        )

        print(f"CV AUC: {cv_auc:.4f} | Test AUC: {test_auc:.4f}")
        print(classification_report(y_test, pipeline.predict(X_test)))

Open source equivalent — same code runs locally without Azure ML:

# Local training without Azure ML — identical model logic
import joblib

pipeline = build_loan_risk_pipeline()
pipeline.fit(X_train, y_train)

# Export to ONNX for sub-millisecond serving in any language.
# A pipeline with mixed dtypes needs one declared input per column:
# floats for numerics, strings for categoricals.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType, StringTensorType

initial_types = (
    [(name, FloatTensorType([None, 1])) for name in NUMERIC_FEATURES]
    + [(name, StringTensorType([None, 1])) for name in CATEGORICAL_FEATURES]
)
onnx_model = convert_sklearn(pipeline, 'loan_risk', initial_types)
with open('loan_risk_model.onnx', 'wb') as f:
    f.write(onnx_model.SerializeToString())

# ONNX runtime inference: ~0.3ms per prediction
import onnxruntime as rt
sess = rt.InferenceSession('loan_risk_model.onnx')
inputs = {name: X_test[[name]].values.astype(np.float32) for name in NUMERIC_FEATURES}
inputs.update({name: X_test[[name]].values.astype(object) for name in CATEGORICAL_FEATURES})
pred = sess.run(None, inputs)

Key Scikit-learn patterns for enterprise:

  • Pipeline — prevents data leakage between train/test splits; required for production
  • ColumnTransformer — handles mixed numeric/categorical without manual encoding
  • cross_val_score — SR 11-7 requires cross-validation, not just a single train/test split
  • SHAP + ONNX export — explainability + language-agnostic serving

TensorFlow — Production Deep Learning at Scale

When to use it: Computer vision, NLP at scale, mobile/edge deployment, serving millions of predictions per second. TensorFlow's graph execution and TF Serving are production-grade in a way that PyTorch only matched recently.

Real Use Case: Document Classification at Scale

At Domino's, we classified 10 million+ store incident reports per year into 40+ categories to route them to the right operations team. Scikit-learn TF-IDF + SVM was our v1 — 82% accuracy. TensorFlow BERT fine-tune was v2 — 94% accuracy, 40% fewer misrouted tickets, $1.2M annual ops cost reduction.

# Azure stack: Azure ML + TF Serving on AKS
import tensorflow as tf
from tensorflow.keras import layers, Model
from transformers import TFBertModel, BertTokenizer
import numpy as np

LABELS = ['equipment_failure', 'food_safety', 'delivery_delay', 'customer_complaint', 'staffing']
MAX_LEN = 128

class IncidentClassifier(Model):
    def __init__(self, num_labels: int):
        super().__init__()
        # BERT base — fine-tune only the top 2 encoder blocks for speed.
        # TFBertModel.layers exposes only the wrapper layer, so freeze
        # the transformer blocks inside the main layer directly.
        self.bert = TFBertModel.from_pretrained('bert-base-uncased')
        self.bert.bert.embeddings.trainable = False
        for block in self.bert.bert.encoder.layer[:-2]:
            block.trainable = False

        self.dropout = layers.Dropout(0.3)
        # Keep the softmax output in float32; stable under mixed precision
        self.classifier = layers.Dense(num_labels, activation='softmax', dtype='float32')

    def call(self, inputs, training=False):
        # inputs: dict with input_ids, attention_mask, token_type_ids
        bert_output = self.bert(inputs, training=training)
        # Use [CLS] token representation for classification
        pooled = bert_output.pooler_output
        pooled = self.dropout(pooled, training=training)
        return self.classifier(pooled)

def build_and_train(train_dataset: tf.data.Dataset, val_dataset: tf.data.Dataset) -> IncidentClassifier:
    # Mixed precision — 2x throughput on Azure NC-series GPUs.
    # Set the global policy BEFORE building the model so every layer
    # inherits float16 compute with float32 variables.
    tf.keras.mixed_precision.set_global_policy('mixed_float16')

    model = IncidentClassifier(num_labels=len(LABELS))

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )

    callbacks = [
        tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
        # Subclassed model: checkpoint weights only, not the full graph
        tf.keras.callbacks.ModelCheckpoint('best_model.weights.h5', save_best_only=True, save_weights_only=True),
        # Azure ML metric logging
        tf.keras.callbacks.CSVLogger('training_log.csv'),
    ]

    model.fit(train_dataset, validation_data=val_dataset, epochs=10, callbacks=callbacks)
    return model

def export_for_serving(model: IncidentClassifier, export_path: str) -> None:
    # SavedModel format — directly loadable by TF Serving
    tf.saved_model.save(model, export_path)
    # TF Serving REST endpoint: POST /v1/models/incident_classifier:predict
    print(f"Model exported to {export_path}")
    print(f"Deploy with: docker run -p 8501:8501 -v {export_path}:/models/incident_classifier tensorflow/serving")

Open source serving — TF Serving Docker, no cloud required:

# Tokenize and predict against TF Serving REST API
import requests

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

def classify_incident(text: str, serving_url: str = 'http://localhost:8501') -> dict:
    tokens = tokenizer(
        text,
        max_length=MAX_LEN,
        padding='max_length',
        truncation=True,
        return_tensors='np',
    )
    payload = {
        'instances': [{
            'input_ids': tokens['input_ids'][0].tolist(),
            'attention_mask': tokens['attention_mask'][0].tolist(),
            'token_type_ids': tokens['token_type_ids'][0].tolist(),
        }]
    }
    response = requests.post(f'{serving_url}/v1/models/incident_classifier:predict', json=payload)
    predictions = response.json()['predictions'][0]
    label = LABELS[np.argmax(predictions)]
    confidence = max(predictions)
    return {'label': label, 'confidence': confidence}

result = classify_incident("Oven temperature sensor failed during peak hours")
# {'label': 'equipment_failure', 'confidence': 0.97}

PyTorch — Research, LLMs, and Fine-Tuning

When to use it: Custom model architectures, LLM fine-tuning, Hugging Face ecosystem, research where you need to inspect gradients and intermediate activations. PyTorch's eager execution makes debugging fundamentally easier than TensorFlow's graph mode.
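What eager execution buys you: intermediate activations and gradients are ordinary tensors you can hook, print, or assert on mid-run. A minimal sketch with a toy model; the names and shapes are illustrative.

```python
import torch
import torch.nn as nn

# Eager execution: every intermediate tensor is a live Python object
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

activations = {}

def capture(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Forward hook records the first layer's output on every call
model[0].register_forward_hook(capture('hidden'))

x = torch.randn(16, 4)
loss = model(x).pow(2).mean()
loss.backward()

# Inspect what graph-mode frameworks hide behind compilation
print('hidden activation shape:', activations['hidden'].shape)
print('first-layer grad norm:', model[0].weight.grad.norm().item())
```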

Real Use Case: Domain-Specific LLM Fine-Tuning for MortgageIQ

The SO agent at MortgageIQ needed to understand mortgage-specific terminology — "BILT boarding", "MSP reconciliation", "forbearance cure dates" — that general-purpose GPT models hallucinated on. We fine-tuned a 7B model using LoRA on PyTorch with 2 A100s in under 4 hours.

# Azure stack: Azure ML + NC A100 v4 compute + Azure Model Registry
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model, TaskType
from datasets import Dataset
import mlflow

BASE_MODEL = 'mistralai/Mistral-7B-Instruct-v0.2'

def load_model_with_lora(base_model: str) -> tuple:
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        torch_dtype=torch.bfloat16,  # bfloat16 stable on A100s
        device_map='auto',           # distributes across GPUs automatically
    )

    # LoRA: train ~0.2% of parameters instead of 100%
    # Reduces VRAM from 56GB to 18GB; fine-tune a 7B model on a single A100
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=16,                    # rank — higher = more capacity, more VRAM
        lora_alpha=32,           # scaling factor
        target_modules=['q_proj', 'v_proj'],  # only attention projections
        lora_dropout=0.05,
        bias='none',
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()
    # trainable params: 13,631,488 || all params: 7,255,654,400 || trainable%: 0.19%

    return model, tokenizer

def format_mortgage_example(instruction: str, response: str) -> str:
    # Alpaca-style instruction format
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}<|endoftext|>"

def fine_tune(train_data: list[dict], output_dir: str) -> None:
    model, tokenizer = load_model_with_lora(BASE_MODEL)

    dataset = Dataset.from_list([
        {'text': format_mortgage_example(d['instruction'], d['response'])}
        for d in train_data
    ])

    def tokenize(batch):
        return tokenizer(batch['text'], truncation=True, max_length=512, padding='max_length')

    tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

    training_args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,   # effective batch size = 16
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        save_strategy='epoch',
        report_to='mlflow',
    )

    with mlflow.start_run():
        # Causal-LM training needs labels: the collator copies input_ids
        # into labels and masks padding so Trainer can compute a loss
        from transformers import DataCollatorForLanguageModeling
        collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
        trainer = Trainer(model=model, args=training_args, train_dataset=tokenized, data_collator=collator)
        trainer.train()
        model.save_pretrained(output_dir)
        tokenizer.save_pretrained(output_dir)
        mlflow.log_param('base_model', BASE_MODEL)
        mlflow.log_param('lora_rank', 16)

Open source inference — serve locally with vLLM:

# vLLM: 3-5x faster than naive HuggingFace generate()
# pip install vllm
from vllm import LLM, SamplingParams
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Merge LoRA weights into base model for faster serving
import torch
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, './mortgage_lora')
merged = model.merge_and_unload()
merged.save_pretrained('./mortgage_merged')

# vLLM reads the tokenizer from the model directory: save it alongside
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained('./mortgage_merged')

# Serve with vLLM
llm = LLM(model='./mortgage_merged', dtype='bfloat16')
params = SamplingParams(temperature=0.1, max_tokens=512)

outputs = llm.generate([
    "### Instruction:\nWhat is the MSP cure date for a forbearance exit?\n\n### Response:\n"
], params)
print(outputs[0].outputs[0].text)

PyTorch for Computer Vision — Custom CNN Architecture

# Open source: PyTorch + torchvision
# Azure equivalent: same code, run on Azure ML NC compute
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

def build_document_scanner(num_classes: int) -> nn.Module:
    # Transfer learning: EfficientNet-B0 pretrained on ImageNet
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)

    # Freeze all layers except the classifier head
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace classifier for our document categories
    in_features = model.classifier[1].in_features
    model.classifier = nn.Sequential(
        nn.Dropout(p=0.3),
        nn.Linear(in_features, num_classes),
    )
    return model

def train_epoch(model: nn.Module, loader: DataLoader, optimizer: optim.Optimizer, device: str) -> float:
    model.train()
    criterion = nn.CrossEntropyLoss()
    total_loss = 0.0

    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()

    return total_loss / len(loader)

# Training loop with learning rate scheduling
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = build_document_scanner(num_classes=10).to(device)
optimizer = optim.AdamW(model.classifier.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)

# train_loader: a DataLoader over an ImageFolder dataset (not shown)
for epoch in range(20):
    loss = train_epoch(model, train_loader, optimizer, device)
    scheduler.step()  # cosine decay: step once per epoch
    print(f"epoch {epoch + 1}: loss {loss:.4f}")

# Export to TorchScript for production — typed, no Python required at runtime
scripted = torch.jit.script(model.eval())
scripted.save('document_scanner.pt')

The Full AI Architecture: All Three Frameworks Working Together

In the MortgageIQ platform:

  • Scikit-learn scores every loan in <1ms via ONNX runtime — 5,000 loans/day, zero GPU required
  • TensorFlow BERT classifies uploaded documents (W-2, bank statements, tax returns) — 94% accuracy
  • PyTorch LoRA fine-tune powers the SO agent's mortgage-domain knowledge — deployed via Azure AI Foundry

Azure vs Open Source: Side-by-Side

| Capability | Azure Stack | Open Source |
| --- | --- | --- |
| Training orchestration | Azure ML Pipelines | MLflow + DVC |
| Experiment tracking | Azure ML Studio | MLflow Tracking |
| Model registry | Azure ML Model Registry | MLflow Registry |
| Serving (classical) | Azure Container Apps + ONNX | FastAPI + ONNX Runtime |
| Serving (deep learning) | Azure Kubernetes Service + TF Serving | Docker + TF Serving |
| Serving (LLMs) | Azure AI Foundry / AOAI | vLLM + Ollama |
| Feature store | Azure ML Feature Store | Feast |
| Monitoring | Azure Monitor + Application Insights | Prometheus + Grafana + EvidentlyAI |
| GPU compute | NC A100 v4 (A100 80GB) ⚠️ ~$3.40/hr | Lambda Labs / RunPod ~$1.99/hr |

What Fails in Enterprise Python AI

1. Scikit-learn in production without a pipeline. Fitting the scaler on test data: every enterprise ML bug I have seen traces back to a StandardScaler that was .fit() on the full dataset before the train/test split. Always Pipeline.
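The leak is easy to demonstrate on synthetic data (illustrative, not the MortgageIQ model): a scaler fit before the split has already absorbed test-set statistics, while a Pipeline re-fits preprocessing inside every CV fold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))
y = (X[:, 0] + rng.normal(size=1000) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# WRONG: the scaler sees every row, including the held-out test set
leaky_scaler = StandardScaler().fit(X)

# RIGHT: the Pipeline re-fits the scaler on each CV fold's train split
pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
auc = cross_val_score(pipe, X_train, y_train, cv=5, scoring='roc_auc').mean()

honest_scaler = StandardScaler().fit(X_train)
print('statistics leaked from the test set:',
      np.abs(leaky_scaler.mean_ - honest_scaler.mean_))
print(f'leak-free CV AUC: {auc:.3f}')
```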

2. PyTorch in production without TorchScript or ONNX export. Raw PyTorch models depend on the Python runtime. A torch.jit.script() export removes that dependency — the model runs in C++ with no GIL, no Python overhead, deterministic behavior.
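A minimal round trip showing what the export removes (toy model, illustrative): the reloaded TorchScript module carries its own serialized graph, so the serving host never needs the Python class definition.

```python
import io

import torch
import torch.nn as nn

class Scorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

model = Scorer().eval()
scripted = torch.jit.script(model)

# Round-trip through a buffer: the reloaded module is self-describing,
# loadable from C++ (libtorch) with no Python on the serving host
buf = io.BytesIO()
torch.jit.save(scripted, buf)
buf.seek(0)
reloaded = torch.jit.load(buf)

x = torch.randn(4, 8)
assert torch.allclose(model(x), reloaded(x))
```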

3. TensorFlow graph vs eager mode confusion. @tf.function compiles to a graph — fast, but debugging is opaque. Remove it during development; add it back for production serving. Teams that forget this ship models that are 10x slower than they need to be.
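The toggle is one call: tf.config.run_functions_eagerly(True) forces compiled functions back into eager mode so breakpoints and print() inside the body behave like ordinary Python. A minimal sketch; the function here is illustrative.

```python
import tensorflow as tf

@tf.function
def squared_error(y_true, y_pred):
    # Compiled to a graph on first call — fast, but opaque to debuggers
    return tf.reduce_mean(tf.square(y_true - y_pred))

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.constant([1.5, 2.0, 2.0])

graph_result = squared_error(y_true, y_pred)

# Debugging: run every tf.function eagerly, then restore graph mode
tf.config.run_functions_eagerly(True)
eager_result = squared_error(y_true, y_pred)
tf.config.run_functions_eagerly(False)

assert abs(float(graph_result) - float(eager_result)) < 1e-6
```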

4. Mixed framework debt. Training in PyTorch, serving from a TensorFlow SavedModel, pre-processing in Scikit-learn — each with its own numpy version requirement. Solve this with ONNX as the common export format or containerize each model independently.

5. No versioning discipline. model_v2_final_FINAL.pkl is not a model registry. MLflow or Azure ML Model Registry with semantic versions + promotion stages (Staging → Production) is the minimum bar.


Key Takeaways

  • Scikit-learn solves 60% of enterprise ML problems — tabular data, fast iteration, regulatory explainability. Start here before reaching for deep learning
  • TensorFlow owns production serving at scale — TF Serving handles 10,000+ RPS, TFLite runs on edge devices, TF.js runs in browsers
  • PyTorch dominates LLM fine-tuning and research — the Hugging Face ecosystem is PyTorch-first; LoRA lets you fine-tune 7B models on a single A100
  • ONNX is the bridge — export any model (Scikit-learn, PyTorch, TensorFlow) to ONNX for universal sub-millisecond serving in any language
  • Always use Pipeline in Scikit-learn — it is not optional; it is the difference between a model that works in notebooks and one that works in production
  • Azure ML is orchestration, not the model — the same PyTorch/TF code runs on Azure ML or your laptop; Azure adds tracking, scheduling, and governance
  • Pick the tool by problem type, not by team preference — using a 7B parameter LLM where XGBoost works is not sophistication; it is waste