Several years ago, Zendesk moved its SaaS platform to the Amazon Web Services (AWS) Cloud. "The cloud was the best choice for us, because it matches the agile processes we increasingly have in place here," says David Bernstein, director of operations services management at Zendesk. The company initially deployed its Elasticsearch, Logstash, and Kibana (ELK) big-data stack on dozens of Amazon Elastic Compute Cloud (Amazon EC2) I2 instances using local instance storage to meet system requirements around memory and disk performance. Zendesk uses the ELK stack for logging as part of its DevOps development model.

While its AWS architecture was effective for several years, Zendesk eventually needed a better way to scale its ELK cluster. "We were growing fast as a company, so we needed to scale the cluster, but we were using built-in instance storage, and we always had to add more instances if we wanted more storage," says Kyle House, a senior software engineer at Zendesk. "That meant our costs were rising, and we didn't have an easy way to control our storage."

Zendesk also needed to improve its data-encryption capabilities. "We had to write and maintain a lot of code to ensure all the data on disk was properly encrypted," says House. "Maintaining encryption logic was a lot of work, and it was error prone."

Finally, Zendesk sought to increase its data-retention window. "We needed to keep 90 days of data, but 30 days was the maximum we would have been able to do with the cost," says Bernstein.

As Zendesk began exploring a redesign of its ELK cluster, it noticed users were only accessing log data that was a few days old. "Data older than seven days was only used for reporting, and it didn't require high performance," House says.
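The access pattern described here, hot data for about seven days, colder reporting data out to a 90-day retention limit, is the kind of policy Elasticsearch can express with index lifecycle management. The article does not say which mechanism Zendesk used; the sketch below is only an illustration of how such a tiered retention policy could look, with the phase names and thresholds chosen to match the numbers quoted above.

```python
# Illustrative sketch (not Zendesk's actual configuration): build an
# Elasticsearch ILM-style policy dict that mirrors the access pattern
# described above -- logs stay "hot" for their first seven days, move
# to cheaper "warm" storage for reporting, and are deleted at 90 days.
import json


def build_log_lifecycle_policy(hot_days: int = 7, retention_days: int = 90) -> dict:
    """Return an ILM-style policy dict for time-series log indices."""
    return {
        "policy": {
            "phases": {
                # Recent logs: actively written and queried, need fast storage.
                "hot": {
                    "min_age": "0ms",
                    "actions": {"rollover": {"max_age": f"{hot_days}d"}},
                },
                # Older logs: used only for reporting, so reduced replica
                # count and force-merged segments are acceptable.
                "warm": {
                    "min_age": f"{hot_days}d",
                    "actions": {
                        "allocate": {"number_of_replicas": 1},
                        "forcemerge": {"max_num_segments": 1},
                    },
                },
                # Past the retention window: drop the index entirely.
                "delete": {
                    "min_age": f"{retention_days}d",
                    "actions": {"delete": {}},
                },
            }
        }
    }


if __name__ == "__main__":
    print(json.dumps(build_log_lifecycle_policy(), indent=2))
```

In a real cluster this JSON body would be sent to the `_ilm/policy` API and attached to the log indices via an index template, so that aging data migrates to cheaper storage automatically instead of requiring more instances.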