Go (Golang) has become the language of choice for building microservices at companies like Google, Uber, Netflix, and Dropbox. Its simplicity, performance, and excellent concurrency support make it ideal for distributed systems.
In this comprehensive guide, we’ll explore proven patterns and best practices for building production-ready microservices with Go.
Go offers compelling performance advantages: statically compiled binaries, fast startup, a low memory footprint, and goroutines that make massive concurrency cheap.
Go’s design principles enhance development speed: a small language surface, fast compilation, a strong standard library, and built-in tooling for formatting and testing.
Running Go microservices in production is straightforward: each service compiles to a single static binary with no runtime dependencies, which keeps containers small and deployments simple.
Structure your microservice in layers with clear boundaries:
cmd/
  server/
    main.go             # Application entry point
internal/
  domain/
    user.go             # Business entities
    repository.go       # Repository interfaces
  usecase/
    user_service.go     # Business logic
  repository/
    postgres/
      user_repo.go      # PostgreSQL implementation
    redis/
      cache_repo.go     # Redis cache implementation
  handler/
    http/
      user_handler.go   # HTTP handlers
      middleware.go     # HTTP middleware
  config/
    config.go           # Configuration management
pkg/
  logger/
    logger.go           # Logging utilities
Benefits: clear separation of concerns, business logic that can be tested without a database, and storage or transport implementations that can be swapped behind interfaces. The internal/ directory also prevents other modules from importing your private packages.
Use an API Gateway as the single entry point for clients:
Client → API Gateway → [Auth Service, User Service, Order Service, ...]
Responsibilities: request routing, authentication and authorization, rate limiting, TLS termination, and response aggregation.
Popular Tools: Kong, Traefik, NGINX, and cloud-managed gateways such as AWS API Gateway.
Implement cross-cutting concerns at the infrastructure layer:
Service Mesh Responsibilities: mutual TLS between services, traffic shaping and retries, load balancing, and uniform telemetry.
Popular Options: Istio, Linkerd, and Consul Connect.
Never hardcode configuration. Use environment-based config:
package config

import (
	"github.com/kelseyhightower/envconfig"
)

type Config struct {
	ServerPort  int    `envconfig:"SERVER_PORT" default:"8080"`
	DatabaseURL string `envconfig:"DATABASE_URL" required:"true"`
	RedisURL    string `envconfig:"REDIS_URL" required:"true"`
	LogLevel    string `envconfig:"LOG_LEVEL" default:"info"`
	JWTSecret   string `envconfig:"JWT_SECRET" required:"true"`

	// External service URLs
	AuthServiceURL string `envconfig:"AUTH_SERVICE_URL"`

	// Performance tuning
	MaxConnections    int `envconfig:"MAX_CONNECTIONS" default:"100"`
	ConnectionTimeout int `envconfig:"CONNECTION_TIMEOUT" default:"30"`
}

func Load() (*Config, error) {
	var cfg Config
	err := envconfig.Process("", &cfg)
	if err != nil {
		return nil, err
	}
	return &cfg, nil
}
Key Principles: read configuration from the environment, fail fast at startup when required values are missing, provide sensible defaults for everything else, and keep secrets out of version control.
Implement structured logging for better observability:
package logger

import (
	"go.uber.org/zap"
)

var Log *zap.Logger

func Init(level string) error {
	config := zap.NewProductionConfig()
	config.Level = zap.NewAtomicLevelAt(parseLevel(level))

	var err error
	Log, err = config.Build(
		zap.AddCaller(),
		zap.AddCallerSkip(1),
	)
	return err
}

// Structured logging with context
func LogRequest(method, path string, duration int64, status int) {
	Log.Info("http_request",
		zap.String("method", method),
		zap.String("path", path),
		zap.Int64("duration_ms", duration),
		zap.Int("status", status),
	)
}

func LogError(msg string, err error, fields ...zap.Field) {
	allFields := append(fields, zap.Error(err))
	Log.Error(msg, allFields...)
}
Benefits: machine-parseable logs with consistent fields, cheap filtering and aggregation in tools like Elasticsearch or Loki, and request context attached to every entry.
Handle shutdown signals properly to avoid losing in-flight requests:
func main() {
	srv := &http.Server{
		Addr:    ":8080",
		Handler: router,
	}

	// Start server in goroutine
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			logger.Log.Fatal("Server failed", zap.Error(err))
		}
	}()

	// Wait for interrupt signal
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	logger.Log.Info("Shutting down server...")

	// Graceful shutdown with timeout
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	if err := srv.Shutdown(ctx); err != nil {
		logger.Log.Fatal("Server forced to shutdown", zap.Error(err))
	}

	logger.Log.Info("Server exited")
}
Implement comprehensive health checks for orchestration platforms:
func healthHandler(deps *Dependencies) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		health := map[string]string{
			"status":   "healthy",
			"database": "healthy",
			"redis":    "healthy",
		}

		// Check database connection
		if err := deps.DB.Ping(); err != nil {
			health["database"] = "unhealthy"
			health["status"] = "unhealthy"
		}

		// Check Redis connection
		if err := deps.Redis.Ping().Err(); err != nil {
			health["redis"] = "unhealthy"
			health["status"] = "unhealthy"
		}

		// Write the status code exactly once, after all checks have run.
		if health["status"] != "healthy" {
			w.WriteHeader(http.StatusServiceUnavailable)
		}
		json.NewEncoder(w).Encode(health)
	}
}
Kubernetes Probes:
/health/live - Is the service running?
/health/ready - Can the service handle traffic?
/health/startup - Has the service finished initialization?
Prevent cascade failures with circuit breakers:
import "github.com/sony/gobreaker"

var circuitBreaker *gobreaker.CircuitBreaker

func init() {
	settings := gobreaker.Settings{
		Name:        "external-api",
		MaxRequests: 3,
		Interval:    60 * time.Second,
		Timeout:     10 * time.Second,
		ReadyToTrip: func(counts gobreaker.Counts) bool {
			failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
			return counts.Requests >= 3 && failureRatio >= 0.6
		},
	}
	circuitBreaker = gobreaker.NewCircuitBreaker(settings)
}

func callExternalService(ctx context.Context, req *Request) (*Response, error) {
	result, err := circuitBreaker.Execute(func() (interface{}, error) {
		return makeHTTPRequest(ctx, req)
	})
	if err != nil {
		return nil, err
	}
	return result.(*Response), nil
}
States: Closed (requests flow normally while failures are counted), Open (requests fail immediately without touching the dependency), and Half-Open (a limited number of trial requests probe whether the dependency has recovered).
Always use context for timeout and cancellation:
func (s *UserService) GetUser(ctx context.Context, userID string) (*User, error) {
	// Add timeout to context if not already set
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	// Database query respects context timeout
	user := &User{}
	err := s.db.GetContext(ctx, user, "SELECT * FROM users WHERE id = $1", userID)
	if err != nil {
		return nil, err
	}

	// External API call with same context
	profile, err := s.fetchUserProfile(ctx, userID)
	if err != nil {
		logger.LogError("Failed to fetch profile", err, zap.String("user_id", userID))
		// Decision: return partial data or error?
		return user, nil // Graceful degradation
	}

	user.Profile = profile
	return user, nil
}
Protect your service from overload:
import "golang.org/x/time/rate"

type RateLimitMiddleware struct {
	limiter *rate.Limiter
}

func NewRateLimitMiddleware(rps int) *RateLimitMiddleware {
	return &RateLimitMiddleware{
		limiter: rate.NewLimiter(rate.Limit(rps), rps*2),
	}
}

func (m *RateLimitMiddleware) Limit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !m.limiter.Allow() {
			http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
When to Use: public-facing endpoints, protecting downstream dependencies that cannot absorb bursts, and enforcing per-client quotas.
HTTP with JSON:
// Simple, universal, easy to debug
// Best for external APIs and moderate performance needs
gRPC with Protocol Buffers:
// Binary encoding over HTTP/2 is typically several times faster than JSON over HTTP/1.1
// Strong typing with code generation
// Best for internal service communication
When to Use: internal service-to-service calls, high-throughput or streaming workloads, and APIs where a strict contract matters more than human readability. For asynchronous, event-driven communication between services, popular options include NATS, Apache Kafka, and RabbitMQ.
Test business logic in isolation:
func TestUserService_CreateUser(t *testing.T) {
	mockRepo := &mockUserRepository{}
	service := NewUserService(mockRepo)

	user := &User{
		Email: "test@example.com",
		Name:  "Test User",
	}

	mockRepo.On("Create", mock.Anything, user).Return(nil)

	err := service.CreateUser(context.Background(), user)

	assert.NoError(t, err)
	mockRepo.AssertExpectations(t)
}
Test service interactions:
func TestUserAPI_Integration(t *testing.T) {
	// Start test database
	db := setupTestDB(t)
	defer db.Close()

	// Start test server
	srv := startTestServer(t, db)
	defer srv.Close()

	// Test create user endpoint
	resp, err := http.Post(
		srv.URL+"/users",
		"application/json",
		strings.NewReader(`{"email":"test@example.com"}`),
	)
	require.NoError(t, err)
	assert.Equal(t, http.StatusCreated, resp.StatusCode)
}
Verify performance under load:
# Using vegeta
echo "GET http://localhost:8080/users/123" | \
vegeta attack -duration=30s -rate=1000 | \
vegeta report
Expose key metrics:
import "github.com/prometheus/client_golang/prometheus"

var (
	requestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "http_request_duration_seconds",
			Help: "HTTP request duration in seconds",
		},
		[]string{"method", "path", "status"},
	)
)

func init() {
	prometheus.MustRegister(requestDuration)
}
// responseWriter wraps http.ResponseWriter to capture the status code.
type responseWriter struct {
	http.ResponseWriter
	status int
}

func (w *responseWriter) WriteHeader(code int) {
	w.status = code
	w.ResponseWriter.WriteHeader(code)
}

func metricsMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		ww := &responseWriter{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(ww, r)

		duration := time.Since(start).Seconds()
		requestDuration.WithLabelValues(
			r.Method,
			r.URL.Path,
			strconv.Itoa(ww.status),
		).Observe(duration)
	})
}
Track requests across services:
import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

func handleRequest(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	tracer := otel.Tracer("user-service")

	ctx, span := tracer.Start(ctx, "handle-request")
	defer span.End()

	// Business logic with traced context
	// (userID extraction shown for illustration; real routing may differ)
	userID := r.URL.Query().Get("id")
	user, err := getUserFromDB(ctx, userID)
	if err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
		w.WriteHeader(http.StatusInternalServerError)
		return
	}

	json.NewEncoder(w).Encode(user)
}
Create optimized Docker images:
# Multi-stage build
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server ./cmd/server
# Final minimal image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]
Deploy with proper resource limits:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user-service:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
Building production-ready microservices with Go requires attention to: clean layered structure, environment-based configuration, structured logging, graceful shutdown, health checks, resilience patterns such as circuit breakers and rate limiting, thorough testing, and observability through metrics and distributed tracing.
Go’s simplicity and performance make it an excellent choice for microservices, but success requires following proven patterns and best practices.
At Async Squad Labs, we specialize in building scalable microservices architectures with Go. Whether you’re starting a new project or migrating from a monolith, we can help you design and implement a robust microservices platform.
Ready to build high-performance microservices? Contact us to discuss your project.