Whenever analysts or journalists assemble lists of the top trends for this year, “big data” is almost certain to be on the list. While the catchphrase is fairly new, in one sense, big data isn’t really a new concept. Computers have always worked with large and growing sets of data, and we’ve had databases and data warehouses for years.
What is new is how much bigger that data is, how quickly it is growing and how complex it has become. Enterprises understand that the data in their systems represents a gold mine of insights that could help them improve their processes and their performance. But they need tools that can help them collect and analyze that data.
Not surprisingly, the big data market is growing very quickly in response to the growing demand from enterprises. According to IDC, the market for big data products and services was worth $3.2 billion in 2010, and it predicts the market will grow to $16.9 billion by 2015. That's a 39.4 percent annual growth rate, about seven times the growth rate IDC expects for the IT market as a whole.
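As a quick sanity check, IDC's figures are internally consistent: compounding the 2010 market size at the stated annual rate for five years lands very close to the 2015 projection. This is a back-of-the-envelope check using only the numbers quoted above, not IDC's own model:

```python
# Compound the 2010 figure ($3.2B) at 39.4% per year for five years;
# the result should roughly match IDC's $16.9B projection for 2015.
start, cagr, years = 3.2, 0.394, 5
projected = start * (1 + cagr) ** years
print(round(projected, 1))  # about 16.8, close to IDC's 16.9
```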
Interestingly, many of the best and best-known big data tools available are open source projects. The best known of these is Hadoop, which is spawning an entire industry of related services and products. This month, we're profiling Hadoop, as well as 49 other big data projects. Here you'll find many Apache projects related to Hadoop, as well as NoSQL databases, business intelligence tools, development tools and much more.
If we've overlooked any important open source big data tools, please feel free to note them in the comments section below.
Big Data Analysis Platforms and Tools
You simply can’t talk about big data without mentioning Hadoop. The Apache distributed data processing software is so pervasive that often the terms “Hadoop” and “big data” are used synonymously. The Apache Foundation also sponsors a number of related projects that extend the capabilities of Hadoop, and many of them are mentioned below. In addition, numerous vendors offer supported versions of Hadoop and related technologies. Operating System: Windows, Linux, OS X.
Originally developed by Google, MapReduce is described on its website as "a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes." It's used by Hadoop, as well as many other data processing applications. Operating System: OS Independent.
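The programming model itself is small enough to sketch without any framework. This toy Python word count mimics the map, shuffle and reduce phases that Hadoop runs at cluster scale; it is illustrative only, since real MapReduce distributes each phase across many nodes:

```python
from collections import defaultdict

def map_phase(docs):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big tools", "big clusters"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 3
```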
GridGain offers an alternative to Hadoop's MapReduce that is compatible with the Hadoop Distributed File System. It offers in-memory processing for fast analysis of real-time data. You can download the open source version from GitHub or purchase a commercially supported version from GridGain. Operating System: Windows, Linux, OS X.
Developed by LexisNexis Risk Solutions, HPCC is short for “high performance computing cluster.” It claims to offer superior performance to Hadoop. Both free community versions and paid enterprise versions are available. Operating System: Linux.
Now owned by Twitter, Storm offers distributed real-time computation capabilities and is often described as the “Hadoop of realtime.” It’s highly scalable, robust, fault-tolerant and works with nearly all programming languages. Operating System: Linux.
Originally developed by Facebook, the Cassandra NoSQL database is now managed by the Apache Foundation. It's used by many organizations with large, active datasets, including Netflix, Twitter, Urban Airship, Constant Contact, Reddit, Cisco and Digg. Commercial support and services are available through third-party vendors. Operating System: OS Independent.
Another Apache project, HBase is the non-relational data store for Hadoop. Features include linear and modular scalability, strictly consistent reads and writes, automatic failover support and much more. Operating System: OS Independent.
MongoDB was designed to support humongous databases. It’s a NoSQL database with document-oriented storage, full index support, replication and high availability, and more. Commercial support is available through 10gen. Operating system: Windows, Linux, OS X, Solaris.
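To illustrate what "document-oriented storage with index support" means in practice, here is a toy in-memory document store in Python. The class and its methods are invented for illustration and bear no relation to MongoDB's actual storage engine, which adds B-tree indexes, replication and much more:

```python
class TinyDocStore:
    """Toy document store: schemaless dicts plus one secondary index.
    Illustrative only -- not MongoDB's actual design."""
    def __init__(self, index_field):
        self.docs = []
        self.index_field = index_field
        self.index = {}  # field value -> list of positions in self.docs

    def insert(self, doc):
        """Store a free-form dict and index it on one field."""
        pos = len(self.docs)
        self.docs.append(doc)
        key = doc.get(self.index_field)
        self.index.setdefault(key, []).append(pos)

    def find(self, value):
        """Indexed lookup: no scan over the whole collection."""
        return [self.docs[p] for p in self.index.get(value, [])]

store = TinyDocStore(index_field="user")
store.insert({"user": "ana", "text": "hello"})
store.insert({"user": "bo", "text": "hi"})
store.insert({"user": "ana", "text": "again"})
print(len(store.find("ana")))  # 2
```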
The "world's leading graph database," Neo4j boasts performance improvements of up to 1,000x or more versus relational databases. Interested organizations can purchase advanced or enterprise versions from Neo Technology. Operating System: Windows, Linux.
The OrientDB NoSQL database can store up to 150,000 documents per second and can load graphs in just milliseconds. It combines the flexibility of document databases with the power of graph databases, while supporting features such as ACID transactions, fast indexes, native and SQL queries, and JSON import and export. Operating system: OS Independent.
Based on Terracotta, Terrastore boasts “advanced scalability and elasticity features without sacrificing consistency.” It supports custom data partitioning, event processing, push-down predicates, range queries, map/reduce querying and processing and server-side update functions. Operating System: OS Independent.
Best known as Twitter’s database, FlockDB was designed to store social graphs (i.e., who is following whom and who is blocking whom). It offers horizontal scaling and very fast reads and writes. Operating System: OS Independent.
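A social graph store of this kind is, at its core, a set of adjacency lists per user. This toy Python sketch (a hypothetical API, not FlockDB's) shows why follow and block queries reduce to fast set operations:

```python
from collections import defaultdict

class SocialGraph:
    """Toy FlockDB-style store: adjacency sets for follows and blocks."""
    def __init__(self):
        self.following = defaultdict(set)   # user -> users they follow
        self.followers = defaultdict(set)   # user -> users following them
        self.blocking = defaultdict(set)    # user -> users they block

    def follow(self, a, b):
        """a follows b, unless either party blocks the other."""
        if b in self.blocking[a] or a in self.blocking[b]:
            return False
        self.following[a].add(b)
        self.followers[b].add(a)
        return True

    def block(self, a, b):
        """a blocks b; sever any follow edges in both directions."""
        self.blocking[a].add(b)
        self.following[a].discard(b); self.followers[b].discard(a)
        self.following[b].discard(a); self.followers[a].discard(b)

g = SocialGraph()
g.follow("alice", "bob")
g.block("bob", "alice")
print("bob" in g.following["alice"])  # False: the block removed the edge
```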
Used by many telecom companies, Hibari is a key-value, big data store with strong consistency, high availability and fast performance. Support is available through Gemini Mobile. Operating System: OS Independent.
Riak humbly claims to be “the most powerful open-source, distributed database you’ll ever put into production.” Users include Comcast, Yammer, Voxer, Boeing, SEOMoz, Joyent, Kiip.me, DotCloud, Formspring, the Danish Government and many others. Operating System: Linux, OS X.
This NoSQL database offers efficiency and fast performance that result in cost savings versus similar databases. The code is 100 percent open source, but paid support is available. Operating System: Linux, OS X.
This distributed database can run on a single system or scale to hundreds or thousands of machines. Features include dynamic sharding, high performance, high concurrency, high availability and more. Commercial support is available. Operating System: OS Independent.
Hadoop’s data warehouse, Hive promises easy data summarization, ad-hoc queries and other analysis of big data. For queries, it uses a SQL-like language known as HiveQL. Operating System: OS Independent.
This scalable data warehouse supports data stores up to 50TB and offers “market-leading” data compression up to 40:1 for improved performance. Commercial products based on the same technology can be found at InfoBright.com. Operating System: Windows, Linux.
Infinispan from JBoss describes itself as an “extremely scalable, highly available data grid platform.” Java-based, it was designed for multi-core architecture and provides distributed cache capabilities. Operating System: OS Independent.
Sponsored by VMware, Redis offers an in-memory key-value store that can be saved to disk for persistence. It supports many of the most popular programming languages. Operating System: Linux.
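The core idea, an in-memory dictionary that can snapshot itself to disk, can be sketched in a few lines of Python. This is a toy illustration of the pattern, not Redis's actual implementation (Redis uses a compact binary snapshot format and can write it from a background process):

```python
import json
import os
import tempfile

class TinyKV:
    """Toy in-memory key-value store with snapshot-style persistence."""
    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):        # reload the last snapshot, if any
            with open(path) as f:
                self.data = json.load(f)

    def set(self, key, value):
        self.data[key] = value          # all reads and writes hit memory

    def get(self, key, default=None):
        return self.data.get(key, default)

    def save(self):
        """Persist the whole dataset to disk in one snapshot."""
        with open(self.path, "w") as f:
            json.dump(self.data, f)

path = os.path.join(tempfile.gettempdir(), "tinykv.json")
db = TinyKV(path)
db.set("visits", 42)
db.save()
restarted = TinyKV(path)        # simulate a process restart
print(restarted.get("visits"))  # 42
```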
Talend makes a number of different business intelligence and data warehouse products, including Talend Open Studio for Big Data, which is a set of data integration tools that support Hadoop, HDFS, Hive, Hbase and Pig. The company also sells an enterprise edition and other commercial products and services. Operating System: Windows, Linux, OS X.
Jaspersoft boasts that it makes "the most flexible, cost effective and widely deployed business intelligence software in the world." The link above primarily discusses the commercial versions of its applications, but you can find the open source versions, including the Big Data Reporting Tool, at JasperForge.org. Operating System: OS Independent.
The open source Palo Suite includes an OLAP Server, Palo Web, Palo ETL Server and Palo for Excel. Jedox offers commercial software based on the same tools. Operating System: OS Independent.
Used by more than 10,000 companies, Pentaho offers business and big data analytics tools with data mining, reporting and dashboard capabilities. See the Pentaho Community Wiki for easy access to the open source downloads. Operating System: Windows, Linux, OS X.
SpagoBI claims to be “the only entirely open source business intelligence suite.” Commercial support, training and services are available. Operating System: OS Independent.
The Konstanz Information Miner, or KNIME, offers user-friendly data integration, processing, analysis, and exploration. In 2010, Gartner named KNIME a “Cool Vendor” in analytics, business intelligence, and performance management. In addition to the open source desktop version, several commercial versions are also available. Operating System: Windows, Linux, OS X.
Short for "Business Intelligence and Reporting Tools," BIRT is an Eclipse-based tool that adds reporting features to Java applications. Actuate, which co-founded the BIRT project, offers a variety of software based on the open source technology. Operating System: OS Independent.
RapidMiner claims to be “the world-leading open-source system for data and text mining.” RapidAnalytics is a server version of that product. In addition to the open source versions of each, enterprise versions and paid support are also available from the same site. Operating System: OS Independent.
This Apache project offers algorithms for clustering, classification and batch-based collaborative filtering that run on top of Hadoop. The project’s goal is to build scalable machine learning libraries. Operating System: OS Independent.
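Clustering in Mahout centers on algorithms such as k-means; Mahout's contribution is running them as MapReduce jobs over Hadoop. A minimal single-machine k-means in Python shows the underlying algorithm (1-D points for brevity; this is the textbook algorithm, not Mahout's distributed implementation):

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: assign each point to its nearest center, then
    recompute each center as the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Keep a center unchanged if no points were assigned to it.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious 1-D clusters, around 1 and around 10.
points = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
final = sorted(round(c, 1) for c in kmeans(points, [0.0, 5.0]))
print(final)  # [1.0, 10.0]
```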
The Orange project hopes to make data mining "fruitful and fun" for both novices and experts. It offers a wide variety of visualizations, plus a toolbox of more than 100 widgets. Operating System: Windows, Linux, OS X.
Short for “Waikato Environment for Knowledge Analysis,” Weka offers a set of algorithms for data mining that you can apply directly to data or use in another Java application. It’s part of a larger machine learning project, and it’s also sponsored by Pentaho. Operating System: Windows, Linux, OS X.
Also known as “jWork,” this Java-based project provides scientists, engineers and students with an interactive environment for scientific computation, data analysis and data visualization. It’s frequently used in data mining, as well as for mathematics and statistical analysis. Operating System: OS Independent.
KEEL stands for "Knowledge Extraction based on Evolutionary Learning," and it aims to help users assess evolutionary algorithms for data mining problems like regression, classification, clustering and pattern mining. It includes a large collection of existing algorithms that can serve as baselines for comparison with new algorithms. Operating System: OS Independent.
Another Java-based data mining framework, SPMF originally focused on sequential pattern mining, but now also includes tools for association rule mining, sequential rule mining and frequent itemset mining. Currently, it includes 46 different algorithms. Operating System: OS Independent.
Rattle, the “R Analytical Tool To Learn Easily,” makes it easier for non-programmers to use the R language by providing a graphical interface for data mining. It can create data summaries (both visual and statistical), build models, draw graphs, score datasets and more. Operating System: Windows, Linux, OS X.
Sponsored by Red Hat, Gluster offers unified file and object storage for very large datasets. Because it can scale to 72 brontobytes, it can be used to extend the capabilities of Hadoop beyond the limitations of HDFS (see below). Operating System: Linux.
Also known as HDFS, this is the primary storage system for Hadoop. It quickly replicates data onto several nodes in a cluster in order to provide reliable, fast performance. Operating System: Windows, Linux, OS X.
Pig/Pig Latin
Another Apache big data project, Pig is a data analysis platform that uses a textual language called Pig Latin and produces sequences of MapReduce programs. It makes it easier to write, understand and maintain programs that perform data analysis tasks in parallel. Operating System: OS Independent.
R is a programming language and an environment for statistical computing and graphics, similar to the S language developed at Bell Laboratories. The environment includes a set of tools that make it easier to manipulate data, perform calculations and generate charts and graphs. Operating System: Windows, Linux, OS X.
ECL ("Enterprise Control Language") is the language for working with HPCC. A complete set of tools, including an IDE and a debugger, is included with HPCC, and documentation is available on the HPCC site. Operating System: Linux.
Big Data Search
The self-proclaimed “de facto standard for search libraries,” Lucene offers very fast indexing and searching for very large datasets. In fact, it can index over 95GB/hour when using modern hardware. Operating System: OS Independent.
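Lucene's speed rests on an inverted index: a map from each term to the documents containing it, so a query touches short postings lists instead of scanning raw text. A toy Python version of the idea (illustrative only; Lucene adds analyzers, scoring, and compressed on-disk segments):

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: map each term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND query: intersect the posting sets of every query term."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = ["Hadoop powers big data",
        "Lucene indexes big data fast",
        "small data"]
index = build_index(docs)
print(sorted(search(index, "big data")))  # [0, 1]
```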
Solr is an enterprise search platform based on the Lucene tools. It powers the search capabilities for many large sites, including Netflix, AOL, CNET and Zappos. Operating System: OS Independent.
Data Aggregation and Transfer
Sqoop transfers data between Hadoop and RDBMSes and data warehouses. In March 2012, it became a top-level Apache project. Operating System: OS Independent.
Another Apache project, Flume collects, aggregates and transfers log data from applications to HDFS. It’s Java-based, robust and fault-tolerant. Operating System: Windows, Linux, OS X.
Built on top of HDFS and MapReduce, Chukwa collects data from large distributed systems. It also includes tools for displaying and analyzing the data it collects. Operating System: Linux, OS X.
Miscellaneous Big Data Tools
Terracotta’s “Big Memory” technology allows enterprise applications to store and manage big data in server memory, dramatically speeding performance. The company offers both open source and commercial versions of its Terracotta platform, BigMemory, Ehcache and Quartz software. Operating System: OS Independent.
Apache Avro is a data serialization system based on JSON-defined schemas. APIs are available for Java, C, C++ and C#. Operating System: OS Independent.
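An Avro schema is itself a JSON document. The record schema below is a hypothetical example in Avro's schema syntax; the Python check that follows is a crude structural validator written purely for illustration, not part of the Avro libraries (which also handle unions, nested records and compact binary encoding):

```python
import json

# A hypothetical Avro record schema, written as plain JSON.
schema = json.loads("""
{"type": "record",
 "name": "User",
 "fields": [{"name": "id",   "type": "long"},
            {"name": "name", "type": "string"}]}
""")

def conforms(record, schema):
    """Crude check that a dict matches a flat record schema:
    every declared field must be present with the right Python type."""
    types = {"long": int, "string": str}
    return all(isinstance(record.get(f["name"]), types[f["type"]])
               for f in schema["fields"])

print(conforms({"id": 7, "name": "ada"}, schema))    # True
print(conforms({"id": "7", "name": "ada"}, schema))  # False: id is not a long
```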
The Apache Oozie project is designed to coordinate the scheduling of Hadoop jobs. It can trigger jobs at a scheduled time or based on data availability. Operating System: Linux, OS X.
Formerly a Hadoop sub-project, Zookeeper is “a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.” APIs are available for Java and C, with Python, Perl, and REST interfaces planned. Operating System: Linux, Windows (development only), OS X (development only).