Web Application Performance Optimization: From Frontend to Backend
Website performance directly impacts user experience, conversion rates, and search engine rankings. A one-second delay in page load time can reduce conversions by 7%, and 53% of mobile users abandon sites that take longer than 3 seconds to load.
This comprehensive guide covers proven strategies to optimize web application performance from frontend to backend.
Google’s Core Web Vitals measure real user experience:
Largest Contentful Paint (LCP): Loading performance
First Input Delay (FID) / Interaction to Next Paint (INP): Interactivity
Cumulative Layout Shift (CLS): Visual stability
Images typically account for 50-90% of page weight.
Image Formats:
JPEG: Photos, complex images
WebP: Modern format, 25-35% smaller than JPEG
AVIF: Next-gen format, even smaller (when supported)
SVG: Icons, logos, simple graphics
Implementation:
<!-- Responsive images with modern formats -->
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Hero image"
       loading="lazy"
       width="1200"
       height="600">
</picture>
<!-- Lazy loading -->
<img src="photo.jpg" loading="lazy" alt="Photo">
Best Practices: serve appropriately sized variants with srcset, compress aggressively, prefer modern formats with fallbacks, and lazy-load below-the-fold images.
JavaScript is often the #1 performance bottleneck for most sites.
Code Splitting:
// React lazy loading
const Dashboard = lazy(() => import('./Dashboard'));
const Profile = lazy(() => import('./Profile'));

function App() {
  return (
    <Suspense fallback={<Loading />}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/profile" element={<Profile />} />
      </Routes>
    </Suspense>
  );
}
Tree Shaking:
// Good - Import only what you need
import { debounce } from 'lodash-es';
// Bad - Imports entire library
import _ from 'lodash';
Bundle Optimization:
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: 10,
        },
      },
    },
  },
};
Best Practices: split code by route, import only what you use, defer non-critical scripts, and track bundle size over time.
Critical CSS:
<!-- Inline critical CSS -->
<style>
  /* Above-fold styles only */
  .header { /* ... */ }
  .hero { /* ... */ }
</style>

<!-- Load full CSS asynchronously -->
<link rel="preload" href="styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
CSS Optimization:
/* Use CSS containment */
.card {
  contain: layout style paint;
}

/* Optimize animations */
.animated {
  will-change: transform;
  transform: translateX(0);
  transition: transform 0.3s;
}
Best Practices: inline only above-the-fold CSS, remove unused rules, and keep animations on transform and opacity to avoid layout thrashing.
Resource Hints:
<!-- DNS prefetch for external domains -->
<link rel="dns-prefetch" href="https://api.example.com">
<!-- Preconnect to critical origins -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<!-- Preload critical resources -->
<link rel="preload" href="font.woff2" as="font" crossorigin>
<!-- Prefetch next-page resources -->
<link rel="prefetch" href="/next-page.html">
Loading Strategies:
// Intersection Observer for lazy loading
const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      observer.unobserve(img);
    }
  });
});

document.querySelectorAll('img[data-src]').forEach(img => {
  observer.observe(img);
});
Font Loading Strategy:
/* Font display strategy */
@font-face {
  font-family: 'CustomFont';
  src: url('font.woff2') format('woff2');
  font-display: swap; /* Show fallback, then custom font */
}
Preload Critical Fonts:
<link rel="preload" href="font.woff2" as="font"
type="font/woff2" crossorigin>
Best Practices: use font-display: swap, preload critical fonts, and subset font files to the characters you actually use.
Caching Strategy:
// service-worker.js
const CACHE_NAME = 'v1';
// Cache-first strategy for static assets
self.addEventListener('fetch', (event) => {
  if (event.request.destination === 'image') {
    event.respondWith(
      caches.match(event.request).then((response) => {
        return response || fetch(event.request).then((response) => {
          return caches.open(CACHE_NAME).then((cache) => {
            cache.put(event.request, response.clone());
            return response;
          });
        });
      })
    );
  }
});
Benefits: offline support, near-instant repeat visits, and reduced load on the origin server.
Query Optimization:
-- Bad - N+1 query problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM posts WHERE user_id = ?;
-- Good - Join with single query
SELECT users.*, posts.*
FROM users
LEFT JOIN posts ON posts.user_id = users.id;
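Why the N+1 pattern hurts can be shown without a database at all. The sketch below simulates both approaches against an in-memory store and counts queries; all names and data are illustrative, not a real ORM:

```python
# Illustrative in-memory "database"; each run_query call counts as one round trip.
USERS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]
POSTS = [{"id": 10, "user_id": 1}, {"id": 11, "user_id": 1}, {"id": 12, "user_id": 2}]

query_count = 0

def run_query(rows, predicate):
    global query_count
    query_count += 1
    return [r for r in rows if predicate(r)]

def posts_n_plus_one():
    # 1 query for users, then 1 query per user: N+1 total
    users = run_query(USERS, lambda u: True)
    return {u["id"]: run_query(POSTS, lambda p, uid=u["id"]: p["user_id"] == uid)
            for u in users}

def posts_batched():
    # 1 query for users + 1 query for all their posts, grouped in memory
    users = run_query(USERS, lambda u: True)
    ids = {u["id"] for u in users}
    all_posts = run_query(POSTS, lambda p: p["user_id"] in ids)
    grouped = {uid: [] for uid in ids}
    for p in all_posts:
        grouped[p["user_id"]].append(p)
    return grouped

query_count = 0
posts_n_plus_one()
n_plus_one_queries = query_count  # grows with the number of users

query_count = 0
posts_batched()
batched_queries = query_count  # always 2, regardless of user count
```

With two users the first approach already issues three queries; the batched version stays at two no matter how many users exist.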
Indexing:
-- Create indexes for frequently queried columns
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);
-- Composite index for multiple columns
CREATE INDEX idx_posts_user_status ON posts(user_id, status);
Connection Pooling:
# Python with SQLAlchemy
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool
engine = create_engine(
    'postgresql://user:pass@localhost/db',
    poolclass=QueuePool,
    pool_size=20,
    max_overflow=10,
    pool_pre_ping=True  # Verify connections before use
)
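Conceptually, a connection pool is just a bounded queue of reusable connections. A minimal sketch (with a stand-in Connection class rather than a real driver) shows the acquire/release cycle:

```python
import queue

class Connection:
    """Stand-in for a real database connection."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

class SimplePool:
    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for i in range(size):
            self._pool.put(Connection(i))

    def acquire(self, timeout=5):
        # Blocks until a connection is free, instead of opening a new one
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = SimplePool(size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()  # reuses the connection c1 returned, no new connection opened
```

Real pools like SQLAlchemy's QueuePool add overflow connections, health checks (pool_pre_ping), and recycling on top of this basic queue.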
Best Practices: pool connections, index the columns you filter and join on, select only the columns you need, and profile slow queries with EXPLAIN.
Caching Layers:
HTTP Caching (Browser/CDN):
from fastapi import FastAPI
from fastapi.responses import Response

app = FastAPI()

@app.get("/api/data")
async def get_data():
    data = fetch_data()
    headers = {
        "Cache-Control": "public, max-age=3600",
        "ETag": generate_etag(data)
    }
    return Response(content=data, headers=headers)
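The ETag revalidation flow can be sketched without a framework: hash the payload, and return 304 Not Modified when the client's If-None-Match header matches (a rough stand-in for whatever generate_etag does in the example above):

```python
import hashlib

def make_etag(body):
    # Content-hash validator: same bytes, same tag
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body, if_none_match=None):
    etag = make_etag(body)
    if if_none_match == etag:
        # Client already has this version: no body, just revalidation
        return 304, b"", etag
    return 200, body, etag

status1, body1, etag = respond(b"payload")       # first request: full response
status2, body2, _ = respond(b"payload", etag)    # revalidation: 304, empty body
```

The second response carries no body, which is the bandwidth saving ETags buy on top of max-age.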
Application Caching (Redis):
import redis
import json
redis_client = redis.Redis(host='localhost', port=6379, db=0)
def get_user(user_id: int):
    # Check cache first
    cache_key = f"user:{user_id}"
    cached = redis_client.get(cache_key)
    if cached:
        return json.loads(cached)

    # Cache miss - fetch from database
    user = db.query(User).filter(User.id == user_id).first()

    # Store in cache for 1 hour
    redis_client.setex(
        cache_key,
        3600,
        json.dumps(user.dict())
    )
    return user
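The cache-aside pattern itself doesn't depend on Redis. A self-contained sketch using a plain dict with expiry timestamps in place of a Redis client (db_fetch stands in for the database query):

```python
import time

_cache = {}   # key -> (expires_at, value)
db_calls = 0  # counts trips to the "database"

def db_fetch(user_id):
    # Stand-in for the real database query
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=3600):
    key = f"user:{user_id}"
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                       # cache hit
    value = db_fetch(user_id)                 # cache miss: go to the source
    _cache[key] = (time.time() + ttl, value)  # store with expiry
    return value

get_user(1)  # miss: hits the "database"
get_user(1)  # hit: served from cache, no second db call
```

Redis adds what a dict can't: sharing the cache across processes and machines, plus eviction under memory pressure.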
CDN Caching:
// Cloudflare Workers example
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
const cache = caches.default
const cacheKey = new Request(request.url, request)
// Check cache
let response = await cache.match(cacheKey)
if (!response) {
// Cache miss - fetch from origin
response = await fetch(request)
// Cache for 1 hour
response = new Response(response.body, response)
response.headers.set('Cache-Control', 'max-age=3600')
await cache.put(cacheKey, response.clone())
}
return response
}
Caching Strategies: cache-aside (load on miss, as above), write-through (update the cache on every write), and simple TTL expiry; choose based on how stale your data is allowed to get.
Response Compression:
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware

app = FastAPI()
app.add_middleware(GZipMiddleware, minimum_size=1000)
Pagination:
@app.get("/api/posts")
async def get_posts(page: int = 1, per_page: int = 20):
    # Keyset pagination: seek past the last seen id instead of using OFFSET
    query = db.query(Post).order_by(Post.id)
    if page > 1:
        last_id = get_last_id_from_previous_page(page, per_page)
        query = query.filter(Post.id > last_id)
    posts = query.limit(per_page).all()
    return {
        "posts": posts,
        "page": page,
        "per_page": per_page,
        "has_more": len(posts) == per_page
    }
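Pure cursor-based pagination drops the page number entirely and hands the client an opaque cursor (here just the last id) to pass back. A sketch over an in-memory list, with illustrative field names:

```python
POSTS = [{"id": i} for i in range(1, 8)]  # 7 posts, ids 1..7

def get_posts(cursor=None, per_page=3):
    # "id > cursor" is cheap in a real DB when id is indexed;
    # unlike OFFSET, cost does not grow with page depth
    rows = [p for p in POSTS if cursor is None or p["id"] > cursor]
    page = rows[:per_page]
    next_cursor = page[-1]["id"] if len(page) == per_page else None
    return {"posts": page, "next_cursor": next_cursor}

page1 = get_posts()                             # ids 1-3
page2 = get_posts(cursor=page1["next_cursor"])  # ids 4-6
page3 = get_posts(cursor=page2["next_cursor"])  # id 7; next_cursor is None
```

A None cursor on the last page doubles as the has_more signal, and results stay stable even if new rows are inserted between requests.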
Rate Limiting:
from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/api/search")
@limiter.limit("10/minute")
async def search(request: Request, query: str):
    return perform_search(query)
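Under the hood, limiters like slowapi track recent request timestamps per client key. A minimal sliding-window sketch of the same "10/minute" rule (the client key and timestamps are illustrative):

```python
from collections import defaultdict, deque
import time

WINDOW = 60.0  # seconds
LIMIT = 10     # max requests per window

_requests = defaultdict(deque)  # client key -> recent request timestamps

def allow(client_key, now=None):
    now = time.monotonic() if now is None else now
    window = _requests[client_key]
    # Drop timestamps that have fallen out of the sliding window
    while window and now - window[0] >= WINDOW:
        window.popleft()
    if len(window) >= LIMIT:
        return False  # over the limit: reject (HTTP 429 in a real API)
    window.append(now)
    return True

results = [allow("1.2.3.4", now=100.0) for _ in range(11)]  # 11 requests at once
later = allow("1.2.3.4", now=161.0)  # window has expired, allowed again
```

Production limiters back this state with Redis so the count survives restarts and is shared across instances.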
Background Jobs:
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379')

@app.task
def send_email(user_id: int):
    # Heavy operation runs asynchronously
    user = get_user(user_id)
    send_welcome_email(user.email)

# Trigger from API endpoint
@api.post("/register")
async def register(user_data: UserCreate):
    user = create_user(user_data)
    # Send email asynchronously
    send_email.delay(user.id)
    return {"status": "success", "user_id": user.id}
Benefits: fast API responses, built-in retries, and workers that scale independently of the web tier.
Horizontal Scaling:
# nginx load balancer
upstream backend {
least_conn; # Route to server with fewest connections
server backend1.example.com weight=3;
server backend2.example.com weight=2;
server backend3.example.com;
# Health checks
server backend4.example.com backup;
}
server {
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Load Balancing Algorithms: round-robin (the default), least connections, IP hash for sticky sessions, and weighted variants of each.
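The least-connections rule that least_conn enables above is simple to state in code: always route to whichever backend currently holds the fewest active connections. A sketch with illustrative backend names:

```python
# Active-connection counts per backend (names are illustrative)
backends = {"backend1": 0, "backend2": 0, "backend3": 0}

def pick_least_conn():
    # Choose the backend with the fewest active connections;
    # ties resolve to the first backend in declaration order
    name = min(backends, key=backends.get)
    backends[name] += 1
    return name

def release(name):
    # Request finished: connection count drops
    backends[name] -= 1

first = pick_least_conn()   # all tied, so "backend1"
second = pick_least_conn()  # "backend1" is busy, so "backend2"
release(first)
third = pick_least_conn()   # "backend1" is free again
```

Real balancers layer health checks and weights on top, but the core selection loop is exactly this comparison.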
Track actual user experiences:
// Web Vitals tracking
import {getCLS, getFID, getFCP, getLCP, getTTFB} from 'web-vitals';
function sendToAnalytics(metric) {
fetch('/analytics', {
method: 'POST',
body: JSON.stringify(metric),
});
}
getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getFCP(sendToAnalytics);
getLCP(sendToAnalytics);
getTTFB(sendToAnalytics);
Automated performance testing:
Lighthouse CI:
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [push]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v9
        with:
          urls: |
            https://example.com
            https://example.com/products
          uploadArtifacts: true
Backend Monitoring:
# Using OpenTelemetry
from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

tracer = trace.get_tracer(__name__)

@app.get("/api/slow-endpoint")
async def slow_endpoint():
    with tracer.start_as_current_span("database-query"):
        data = await slow_database_query()
    with tracer.start_as_current_span("process-data"):
        result = process_data(data)
    return result

# Instrument FastAPI
FastAPIInstrumentor.instrument_app(app)
Popular APM Tools: Datadog, New Relic, Sentry, and the Grafana observability stack (Prometheus, Tempo).
Set performance budgets to maintain standards:
// lighthouse-budget.json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "first-contentful-paint", "budget": 2000 },
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "interactive", "budget": 3500 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "image", "budget": 500 },
      { "resourceType": "total", "budget": 1000 }
    ]
  }
]
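Enforcing a budget like this in CI is a straightforward comparison between measured values and limits. A sketch (the measured numbers are illustrative, not real Lighthouse output):

```python
# Timing budgets in ms, mirroring the lighthouse-budget.json above
budget = {
    "first-contentful-paint": 2000,
    "largest-contentful-paint": 2500,
    "interactive": 3500,
}

# Illustrative measured values from a hypothetical test run
measured = {
    "first-contentful-paint": 1800,
    "largest-contentful-paint": 2700,  # over budget
    "interactive": 3200,
}

# Collect every metric that exceeds its budget
violations = [m for m, limit in budget.items() if measured.get(m, 0) > limit]

# Fail the build if anything is over budget
passed = not violations
```

Lighthouse CI performs this check for you, but the pass/fail logic is no more than this loop.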
Web performance optimization requires a holistic approach:
Key principles: measure before optimizing, focus on the critical rendering path, cache at every layer, and enforce budgets so regressions are caught early.
Remember: Every 100ms improvement in load time can increase conversion rates by 1%.
At Async Squad Labs, we specialize in performance optimization across the stack. From frontend bundle optimization to backend scaling strategies, we help companies deliver fast, responsive applications that users love.
Need help optimizing your application? Contact us to discuss your performance challenges.