ELK intro and Elasticsearch lessons from production
ELK stands for Elasticsearch, Logstash and Kibana. I first became acquainted with it through Susheel Zaveri’s excellent talk at MongoDB Day Bangalore on 19 May 2014. So I was overjoyed when the Elasticsearch Meetup Bangalore’s first meetup coincided with my trip on 27 Sep 2014. Elasticsearch has an open, RESTful API that makes it easy to build applications on top of it. It can process both structured and unstructured data, so you can derive insights from anything from log files to tweets to plain old CSV files, all in near real time. Best of all, you can easily ingest data from all these disparate sources with Logstash, search and analyze across all of it with Elasticsearch, and visualize the results with Kibana. The stack makes these insights available to anyone in an organization through Kibana’s dashboards, which are shareable and require no programming know-how to use effectively.
These features, plus many more, make the ELK stack flexible enough to meet the big data challenges of a wide variety of verticals. A major financial company uses the ELK stack for anomaly detection to root out credit card fraud. Another performs analytics and sentiment analysis across social media data. Yet another detects hacking on its networks, and another uses it for full-text search across e-commerce sites with billions of entries.
The meetup was held at SpringPeople Software Pvt Ltd, Sector 7, HSR Layout, Bengaluru, Karnataka. It had two speakers: Suyog Rao and Vedang Manerikar. It was free of cost, but required registration via a Google Form. Suyog Rao (@suyograo) started with an introduction to ELK, describing Elasticsearch as a schema-free, RESTful JSON document store. The salient points of his talk were:
- The popularity of Elasticsearch can be gauged from its total downloads, which stand at 10 million in the last two years.
- An Elasticsearch cluster can contain multiple indices (databases), which in turn contain multiple types (tables). These types hold multiple documents (rows), and each document has properties (columns). [The terms in brackets are the relational counterparts.]
- It uses sharding for horizontal scalability, and replication for high availability and read performance.
- It supports:
- Unstructured as well as Faceted, structured search
- Enrichment and sorting
- Pagination and Aggregation
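To make the index/type/document analogy concrete, here is a small sketch; the index, type, field names and document are my own illustration, not examples from the talk.

```python
# A document lives at /{index}/{type}/{id}, roughly /{database}/{table}/{row}.
# All names below are illustrative.
doc_path = "/{index}/{type}/{id}".format(
    index="customer_support",  # ~ database
    type="ticket",             # ~ table
    id="42",                   # ~ row's primary key
)

document = {  # ~ a row; its keys are the properties (~ columns)
    "subject": "App crashes on login",
    "status": "open",
    "created_at": "2014-09-27T10:00:00Z",
}

print(doc_path)  # /customer_support/ticket/42
```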
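The sharding and replication point can be pictured through index-creation settings; the body and numbers below are an illustrative sketch, not values from the talk.

```python
# Primary shards spread data across nodes (horizontal scaling); each replica
# is a full copy of a primary shard, giving availability and extra read
# throughput. The numbers are examples only.
create_index_body = {
    "settings": {
        "number_of_shards": 5,     # fixed when the index is created
        "number_of_replicas": 1,   # can be changed on a live index
    }
}

shards = create_index_body["settings"]["number_of_shards"]
replicas = create_index_body["settings"]["number_of_replicas"]
total_shard_copies = shards * (1 + replicas)  # primaries + their replicas
print(total_shard_copies)  # 10
```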
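The search features in the list above can all be combined in a single request body; this is a hedged sketch with my own field names, not a query shown at the meetup.

```python
# Full-text search, sorting, pagination (from/size) and a terms aggregation
# for facet-style counts, expressed as one search body.
search_body = {
    "query": {"match": {"message": "login failure"}},  # full-text search
    "sort": [{"timestamp": {"order": "desc"}}],        # newest first
    "from": 0,     # pagination: offset of the first hit
    "size": 20,    # pagination: page size
    "aggs": {      # aggregation: hit counts per status value
        "by_status": {"terms": {"field": "status"}}
    },
}
```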
He covered Logstash and Kibana next.
- Logstash is a Ruby app that runs on the JVM.
- It allows one to collect, parse, enrich and store logs and events.
- Kibana allows one to have beautiful visualization on top of Elasticsearch index with zero code.
- The new version makes use of the D3 library.
He wrapped up with a quick demo, having covered a lot of ground in a short time.
Vedang Manerikar (@vedang) works with Helpshift, a mobile CRM company based out of Pune and San Francisco. [It is a company with unique hiring practices; see my earlier blog post on Building Silicon Valley culture in India.]
The customer-facing side of the Helpshift product is a simple chat feature within the app, using the Helpshift mobile SDK. The business-facing side is a complex agent dashboard that helps agents process as many issues as possible, as quickly as possible. This business-facing side is built on top of Elasticsearch. He shared the following nuggets of wisdom with us:
- There is no book on Elasticsearch yet, though that should be remedied soon. There are good references and videos, but nothing as structured as a book.
- Don’t use Elasticsearch as a primary database. Data should first go into MySQL, MongoDB or another transactional datastore.
- Though ES allows a mixed-mode node that holds both cluster metadata and data, it is best to run separate dedicated master and data nodes.
- For a multi-tenant setup like Helpshift’s, an index per customer is not a good idea; it is better to partition indices based on index size.
- He shared helpful tips about bulk loading, such as controlling the replica count, though I did not catch them fully.
- A rolling upgrade of ES is fraught with risks, so it is better to spin up a new cluster and decommission the old one. [This was contested by Suyog and Drew.]
- Benchmarking is hugely important and should be done during staging and development to prevent headaches in production. He mentioned a tool called Tsung, which helped them benchmark percolators; percolators allow live notifications of new issues.
- At runtime, a lot of debugging can be done using the cat APIs, so make use of them.
- Tune JVM parameters; for example, allocate more memory to the young generation.
- ES uses Lucene under the hood, so some troubleshooting might require understanding how Lucene works as well.
- RTFM: read the manual carefully. Pay special attention to units, such as whether a particular number refers to milliseconds or seconds.
- Advanced ES users make use of filters to build complex views.
- There were many other tips, but I guess we will have to wait for the slides to arrive.
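The advice on separating master and data nodes maps to two node configurations; the elasticsearch.yml fragments below use the 1.x-era settings and are a generic sketch, not Helpshift’s actual config.

```yaml
# Dedicated master-eligible node: manages cluster state, holds no shards.
node.master: true
node.data: false

# Dedicated data node: holds shards, is never elected master.
# node.master: false
# node.data: true
```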
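Vedang’s point about not using Elasticsearch as a primary database comes down to a write-through pattern; in this sketch the in-memory dicts stand in for a transactional store and the search index, and all names are my own illustration.

```python
# "Primary store first, then index": the transactional store is the source of
# truth; the search index is derived, rebuildable data.
primary_store = {}  # stand-in for MySQL/MongoDB: source of truth
search_index = {}   # stand-in for Elasticsearch: derived copy for search

def save_ticket(ticket_id, ticket):
    primary_store[ticket_id] = ticket  # commit to the primary store first
    try:
        search_index[ticket_id] = ticket  # then index it for search
    except Exception:
        # If indexing fails, the data is still safe in the primary store,
        # and the index can be re-synced or rebuilt later.
        pass

save_ticket("42", {"subject": "App crashes on login"})
print("42" in primary_store, "42" in search_index)  # True True
```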
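The multi-tenant indexing point can be sketched as size-based routing; the threshold, index names and function below are my own illustration of the idea, not Helpshift’s actual scheme.

```python
# Size-based partitioning: small tenants share one index, and only a tenant
# whose data outgrows a threshold gets a dedicated index.
DEDICATED_THRESHOLD_DOCS = 5000000  # illustrative cut-off

def index_for(customer_id, doc_count):
    if doc_count >= DEDICATED_THRESHOLD_DOCS:
        return "issues_" + customer_id  # large tenant: dedicated index
    return "issues_shared"              # small tenants: one shared index

print(index_for("acme", 12000000))  # issues_acme
print(index_for("tinyco", 300))     # issues_shared
```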
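I did not catch Vedang’s exact bulk-loading steps, but the commonly given advice goes like this sketch: drop replicas and pause refreshes during the load, then restore them so replication happens once at the end. The values are illustrative.

```python
# Settings bodies that would be sent to PUT /{index}/_settings around a bulk load.
BULK_LOAD_SETTINGS = {
    "index": {
        "number_of_replicas": 0,    # replicate once after loading, not per write
        "refresh_interval": "-1",   # stop refreshing the index per batch
    }
}

RESTORED_SETTINGS = {
    "index": {
        "number_of_replicas": 1,    # back to normal redundancy
        "refresh_interval": "1s",   # back to near-real-time search
    }
}
```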
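The percolators Vedang benchmarked invert normal search: queries are stored, and each new document is matched against them, which is how live notifications of new issues can be driven. The 1.x-style bodies below are my own illustration.

```python
# A query registered under the special .percolator type, e.g. via
# PUT /issues/.percolator/urgent-crashes (1.x API):
registered_query = {
    "query": {"match": {"subject": "crash"}}
}

# A new document sent to GET /issues/_percolate; the response would list the
# ids of registered queries that match it (here, "urgent-crashes"):
percolate_request = {
    "doc": {"subject": "App crash on startup", "status": "new"}
}
```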
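The point about building complex views from filters rests on the 1.x-era `filtered` query: filter clauses skip scoring and are cacheable, so saved views stay cheap. The field names here are my own example.

```python
# A view of "open payment issues from the last week": a scored full-text
# query wrapped with cacheable, non-scoring filters (1.x syntax).
view_query = {
    "query": {
        "filtered": {
            "query": {"match": {"subject": "payment"}},
            "filter": {
                "bool": {
                    "must": [
                        {"term": {"status": "open"}},
                        {"range": {"created_at": {"gte": "now-7d"}}},
                    ]
                }
            },
        }
    }
}
```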
Posted on October 8, 2014, in Bangalore, Big Data, Big Data Analytics, Document Database, MongoDB, NoSQL, Pune, Technology and tagged Big Data, Database, Document Database, Elasticsearch.