Rate Limiting
Rate limiting helps protect your API from abuse by restricting the number of requests a client can make within a specified time window. This prevents overuse of resources, ensures fair access for all users, and safeguards the system from overload or malicious attacks, such as brute-force attempts.
Features
- Flexible rate limits tailored to different criteria
- Customizable time windows for request tracking
- Multiple rate limit tiers to accommodate varying usage levels
- Real-time analytics and monitoring for tracking request patterns
Types of Rate Limiting Algorithms
There are several rate limiting algorithms available, each suited for different use cases. Here are the most common ones:
Fixed Window
The Fixed Window algorithm divides time into fixed intervals (e.g., one minute) and counts the requests each client makes within the current interval. If the count exceeds the limit, further requests from that client are blocked until the next interval begins.
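To make the behaviour concrete, here is a minimal fixed-window counter sketched in plain JavaScript. It is independent of the z-secure-service library; the fixedWindowAllow helper, its in-memory Map store, and its parameter names are illustrative assumptions only.

// Illustrative fixed-window counter (not the ZSecure implementation)
const counters = new Map(); // clientId -> { windowStart, count }

function fixedWindowAllow(clientId, limit, windowMs, now = Date.now()) {
  const windowStart = Math.floor(now / windowMs) * windowMs; // Start of the current window
  const entry = counters.get(clientId);

  if (!entry || entry.windowStart !== windowStart) {
    // A new window has begun: reset the counter for this client
    counters.set(clientId, { windowStart, count: 1 });
    return true;
  }

  if (entry.count < limit) {
    entry.count += 1; // Still under the limit in this window
    return true;
  }

  return false; // Limit reached: deny until the next window begins
}

// Example: allow at most 100 requests per client per hour
console.log(fixedWindowAllow('client-1', 100, 60 * 60 * 1000));

Each client gets a fresh count when a new window starts, which is also what produces the boundary burst described under Downsides below.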
Configuration
{
  "limit": 100, // Number of requests allowed per window
  "windowMs": 3600000 // Time window in milliseconds (e.g., 3600000 for 1 hour, 86400000 for 1 day)
}
When to Use
Use Fixed Window for simple rate limiting where a strict reset period is acceptable. It works well for preventing abuse or DDoS attacks, but request counts can briefly exceed the intended limit around the reset boundary.
Downsides
- A client can exceed the intended limit around the window boundary (e.g., 100 requests at the very end of one window and 100 more at the start of the next, for 200 requests in a short span).
- High burst traffic could exceed the limit at the reset time.
Example Usage
import express from 'express';
import ZSecure from 'z-secure-service';

const app = express();

// Initialize the ZSecure rate limiter with the provided configuration
const rate = ZSecure({
  API_KEY: "YOUR_API_KEY", // Your API key for authentication
  baseUrl: "YOUR_BASE_URL", // Base URL for the ZSecure service
  rateLimitingRule: {
    rule: {
      algorithm: "FixedWindowRule", // Use the Fixed Window algorithm for rate limiting
      limit: 5, // Maximum number of requests allowed within the window
      windowMs: 60000 // Time window in milliseconds (60 seconds)
    }
  }
});

// Define a route for the root URL
app.get('/', async (req, res) => {
  const userId = req.ip; // Identify the client; the request IP is used here as an example identifier
  // Protect the route using the rate limiter
  const result = await rate.protect(req, userId, 1);

  if (!result.isdenied) {
    res.send('Hello, World!'); // Request allowed
  } else {
    res.status(429).send(result.message); // Request denied: return the denial message
  }
});

app.listen(3000);
Token Bucket
The Token Bucket algorithm allows bursts of requests by storing tokens in a bucket. Tokens are added at a constant rate, and each request consumes one token. If the bucket runs out of tokens, requests are either blocked or throttled until more tokens are available.
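For intuition, the following sketch shows a token-bucket check in plain JavaScript. It is independent of the z-secure-service library; the tokenBucketAllow helper, its in-memory Map, and the refillRatePerSec parameter are illustrative assumptions only.

// Illustrative token bucket (not the ZSecure implementation)
const buckets = new Map(); // clientId -> { tokens, lastRefill }

function tokenBucketAllow(clientId, capacity, refillRatePerSec, now = Date.now()) {
  const bucket = buckets.get(clientId) ?? { tokens: capacity, lastRefill: now };

  // Add tokens for the time elapsed since the last refill, capped at capacity
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsedSec * refillRatePerSec);
  bucket.lastRefill = now;

  if (bucket.tokens >= 1) {
    bucket.tokens -= 1; // Consume one token for this request
    buckets.set(clientId, bucket);
    return true;
  }

  buckets.set(clientId, bucket);
  return false; // Bucket empty: deny (or throttle) until tokens refill
}

// Example: bucket of 5 tokens refilled at 1 token per second
console.log(tokenBucketAllow('client-1', 5, 1));

Because unused capacity accumulates as tokens, a client can burst up to the bucket's capacity and is then held to the refill rate over time.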
Configuration
{
  "capacity": 5, // Maximum number of tokens the bucket can hold
  "refillRate": 1, // Number of tokens added to the bucket per interval
  "intervalMs": 1000 // Interval in milliseconds for adding tokens
}
When to Use
Token Bucket is ideal for applications where short bursts of traffic are allowed but should be limited over time. It's useful for APIs where requests can be rate-limited but still need to handle sudden spikes efficiently.
Downsides
- Potential for token exhaustion if a client sends too many requests in a short period.
- Complexity in implementation compared to Fixed Window.
Example Usage
import express from 'express';
import ZSecure from 'z-secure-service';

const app = express();

// Initialize the ZSecure rate limiter with the provided configuration
const rate = ZSecure({
  API_KEY: "YOUR_API_KEY", // Your API key for authentication
  baseUrl: "YOUR_BASE_URL", // Base URL for the ZSecure service
  rateLimitingRule: {
    rule: {
      algorithm: "TokenBucketRule", // Use the Token Bucket algorithm for rate limiting
      capacity: 5, // Maximum number of tokens in the bucket
      refillRate: 1, // Number of tokens added to the bucket per interval
      intervalMs: 1000 // Interval in milliseconds for adding tokens
    }
  }
});

// Define a route for the root URL
app.get('/', async (req, res) => {
  const userId = req.ip; // Identify the client; the request IP is used here as an example identifier
  // Protect the route using the rate limiter
  const result = await rate.protect(req, userId, 1);

  if (!result.isdenied) {
    res.send('Hello, World!'); // Request allowed
  } else {
    res.status(429).send(result.message); // Request denied: return the denial message
  }
});

app.listen(3000);
Leaky Bucket
The Leaky Bucket algorithm processes requests at a fixed rate, allowing requests to "leak" out of the bucket at a constant rate. If the bucket overflows (too many requests in a short time), the excess requests are discarded or delayed.
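The sketch below illustrates the idea with a simple leaky-bucket counter in plain JavaScript. It is independent of the z-secure-service library; the leakyBucketAllow helper, its in-memory state, and the leakRatePerSec parameter are illustrative assumptions only.

// Illustrative leaky bucket (not the ZSecure implementation)
const queues = new Map(); // clientId -> { level, lastLeak }

function leakyBucketAllow(clientId, capacity, leakRatePerSec, now = Date.now()) {
  const state = queues.get(clientId) ?? { level: 0, lastLeak: now };

  // Drain requests out of the bucket at a constant rate
  const elapsedSec = (now - state.lastLeak) / 1000;
  state.level = Math.max(0, state.level - elapsedSec * leakRatePerSec);
  state.lastLeak = now;

  if (state.level < capacity) {
    state.level += 1; // Room in the bucket: accept the request
    queues.set(clientId, state);
    return true;
  }

  queues.set(clientId, state);
  return false; // Bucket full: discard or delay the excess request
}

// Example: bucket of 5 requests draining at 1 request per second
console.log(leakyBucketAllow('client-1', 5, 1));

Requests drain at a constant rate no matter how quickly they arrive, which smooths traffic but also prevents bursts, as noted under Downsides below.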
Configuration
{
  "capacity": 5, // Maximum number of requests that can be held in the bucket
  "leakRate": 1, // Number of requests that can leak out of the bucket per interval
  "timeout": 1000 // Interval in milliseconds for leaking requests
}
When to Use
Leaky Bucket is useful for APIs that require consistent request processing without sudden spikes. It helps ensure that traffic is handled at a smooth, constant rate, even if the request rate fluctuates.
Downsides
- Excess requests may be discarded or delayed, affecting user experience.
- Does not allow bursts of traffic even if the system can handle it.
Example Usage
import express from 'express';
import ZSecure from 'z-secure-service';

const app = express();

// Initialize the ZSecure rate limiter with the provided configuration
const rate = ZSecure({
  API_KEY: "YOUR_API_KEY", // Your API key for authentication
  baseUrl: "YOUR_BASE_URL", // Base URL for the ZSecure service
  rateLimitingRule: {
    rule: {
      algorithm: "LeakyBucketRule", // Use the Leaky Bucket algorithm for rate limiting
      capacity: 5, // Maximum number of requests that can be held in the bucket
      leakRate: 1, // Number of requests that can leak out of the bucket per interval
      timeout: 1000 // Interval in milliseconds for leaking requests
    }
  }
});

// Define a route for the root URL
app.get('/', async (req, res) => {
  const userId = req.ip; // Identify the client; the request IP is used here as an example identifier
  // Protect the route using the rate limiter
  const result = await rate.protect(req, userId, 1);

  if (!result.isdenied) {
    res.send('Hello, World!'); // Request allowed
  } else {
    res.status(429).send(result.message); // Request denied: return the denial message
  }
});

app.listen(3000);
Combined Rate Limiting and Shield
Combining rate limiting and shield rules provides an additional layer of protection for your API. Rate limiting controls the number of requests a client can make within a specified time window, while shield rules add further security measures, such as blocking or throttling requests based on specific criteria.
Example Usage
import express from 'express';
import ZSecure from 'z-secure-service';

const app = express();

// Initialize ZSecure with both a rate limiting rule and a shield rule
const rate = ZSecure({
  API_KEY: "YOUR_API_KEY", // Your API key for authentication
  baseUrl: "YOUR_BASE_URL", // Base URL for the ZSecure service
  rateLimitingRule: {
    rule: {
      algorithm: "FixedWindowRule", // Use the Fixed Window algorithm for rate limiting
      limit: 5, // Maximum number of requests allowed within the window
      windowMs: 60000 // Time window in milliseconds (60 seconds)
    }
  },
  shieldRule: {
    limit: 5, // Maximum number of requests allowed within the shield window
    threshold: 5, // Threshold for triggering the shield rule
    windowMs: 60000 // Time window in milliseconds (60 seconds)
  }
});

// Define a route for the root URL
app.get('/', async (req, res) => {
  const userId = req.ip; // Identify the client; the request IP is used here as an example identifier
  // Protect the route using both the rate limiting and shield rules
  const result = await rate.protect(req, userId, 1);

  if (!result.isdenied) {
    res.send('Hello, World!'); // Request allowed
  } else {
    res.status(429).send(result.message); // Request denied: return the denial message
  }
});

app.listen(3000);