Optimizing Audit Log Updates with Asynchronous Processing
Enhancing darwin.Cloud Performance by Updating Audit Logs Using Kafka and SQL Replicas
How We’ve Enhanced Audit Log Management for Better Performance
At AccountTECH, we understand the critical role of audit logs in ensuring transparency, compliance, and accountability. However, managing these logs efficiently without impacting real-time operations is a challenge. That’s why we’ve redesigned how our system handles audit logs to strike the perfect balance between reliability and performance.
The Role of Audit Logs
Audit logs are indispensable for tracking every action within our system, especially for financial and sales transactions. They provide:
- A complete history of who did what, when, and how.
- Crucial data for compliance and transaction reporting.
- Insights for post-event analysis and troubleshooting (a sample entry is sketched just below).
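To make this concrete, here is a minimal sketch of what a single audit entry might carry. The field names and types are illustrative assumptions, not darwin.Cloud's actual schema:

```java
import java.time.Instant;

/** Hypothetical shape of one audit log entry; fields are illustrative only. */
public record AuditEntry(
        String userId,      // who performed the action
        String operation,   // INSERT, UPDATE, or DELETE
        String tableName,   // which record set was touched
        String recordKey,   // primary key of the affected row
        String oldValue,    // state before the change (null for an INSERT)
        String newValue,    // state after the change (null for a DELETE)
        Instant occurredAt  // when the change happened
) {}
```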
However, while audit logs are essential, they are not time-sensitive. They are used for research and reporting after the fact, making real-time updates unnecessary.
The Challenge
Previously, audit logs were updated directly on the production server, leading to:
- Increased workload on the production server.
- Slower performance for critical user tasks like viewing and inputting data.
- A missed opportunity to optimize non-time-sensitive processes.
The Solution: Asynchronous Audit Log Processing
We’ve implemented a new approach that offloads audit log updates to a separate server, leveraging modern tools like Kafka and an Always On SQL Replica. Here’s how it works:
- Event Generation:
- Whenever data is modified (INSERT, UPDATE, DELETE), an event is generated on the production server and sent to a Kafka topic. Kafka is a distributed event-streaming platform that queues these events for processing in the background, so the darwin.Cloud user can continue with their work without waiting for the audit write (see the producer sketch after this list).
- Kafka for Asynchronous Processing:
- Kafka queues these events, and our consumer processes them only after a controlled delay, ensuring the SQL Replica has time to synchronize with the production database (see the consumer sketch after this list).
- Leveraging the SQL Replica:
- The delayed Kafka consumer reads the necessary data from the SQL Replica, minimizing the load on the production server. Since the SQL Replica is an exact duplicate of the live production data, the audit logs can be built from the information in the Replica instead of having to read from the production server.
- Dedicated Audit Log Server:
- The processed audit log entries are stored on a separate server with high-performance NVMe drives, ensuring fast and reliable storage.
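The producer side of this pipeline can be sketched with the standard Java kafka-clients API. The topic name `audit-events`, the broker address, and the JSON payload shape are assumptions for illustration, not darwin.Cloud's actual configuration:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class AuditEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical change event emitted after an UPDATE on the
            // production server: just enough to identify the modified row.
            String event = "{\"table\":\"invoices\",\"pk\":\"12345\","
                    + "\"op\":\"UPDATE\",\"ts\":\"2025-01-15T10:23:45Z\"}";

            // Keying by table name keeps events for the same table ordered
            // within a partition; sending asynchronously keeps the user's
            // request fast.
            producer.send(new ProducerRecord<>("audit-events", "invoices", event));
        }
    }
}
```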
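On the consuming side, a minimal sketch of the delayed consumer might look like the following. The 30-second buffer, the topic name, and the `readFromReplica`/`writeAuditEntry` helpers are all hypothetical placeholders for darwin.Cloud's internal logic:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class DelayedAuditLogConsumer {

    // Assumed buffer; in practice this would be tuned to the replica's
    // typical synchronization lag.
    private static final Duration REPLICA_SYNC_BUFFER = Duration.ofSeconds(30);

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder address
        props.put("group.id", "audit-log-writer");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("audit-events")); // hypothetical topic name

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Hold each event until it is old enough for the replica
                    // to have caught up with production.
                    long eligibleAt = record.timestamp() + REPLICA_SYNC_BUFFER.toMillis();
                    long waitMs = eligibleAt - System.currentTimeMillis();
                    if (waitMs > 0) {
                        Thread.sleep(waitMs);
                    }

                    // Hypothetical helpers: read full row state from the SQL
                    // Replica, then persist the entry on the audit log server.
                    String rowState = readFromReplica(record.value());
                    writeAuditEntry(record.value(), rowState);
                }
            }
        }
    }

    private static String readFromReplica(String changeEvent) {
        return "..."; // query the read-only replica here
    }

    private static void writeAuditEntry(String changeEvent, String rowState) {
        // insert into the dedicated audit log database here
    }
}
```

Note that this sketch sleeps inside the poll loop for simplicity; a production version would use the consumer's pause()/resume() mechanism instead, so that long delays do not exceed max.poll.interval.ms and get the consumer evicted from its group.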
Benefits of the New System
- Faster Production Performance:
- By offloading audit log updates to a separate processing server, storing the audit log data on separate drives, and reading data from the SQL Replica, the production server is free to handle real-time user operations without delay.
- Efficient Resource Utilization:
- The SQL Replica handles read-intensive operations, while the dedicated audit log processing server manages write-heavy operations.
- Scalability:
- Kafka’s distributed architecture ensures that the system can handle high volumes of audit log events.
- Timely and Reliable Audit Logs:
- While not real-time, the delayed updates ensure accurate and complete audit logs for research and reporting.
Why This Matters
This new approach allows us to deliver:
- Uncompromised performance for day-to-day operations.
- Reliable and detailed audit logs for compliance and accountability.
- Scalable infrastructure that can grow with your business needs.
At AccountTECH, we’re committed to combining cutting-edge technology with thoughtful design to ensure that darwin.Cloud is both efficient and reliable. By rethinking how we manage audit logs, we’ve created a solution that works for today and scales for tomorrow.