The MarkLogic Data Hub is a software interface for ingesting data from multiple sources, harmonizing and mastering that data, and then searching and analyzing it. It runs on MarkLogic Server, and together they provide a unified platform for mission-critical use cases.
MarkLogic Data Hub v4 and v5 are open-source software interfaces for ingesting data from multiple sources, harmonizing and mastering that data, and then searching and analyzing it. They run on MarkLogic Server, and together they provide a unified platform for mission-critical use cases.
Atomicity, consistency, isolation, and durability: these ACID properties ensure that your enterprise-grade system does not run into issues such as data corruption, stale reads, or inconsistent data.
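As a rough illustration, here is a minimal sketch of a multi-statement transaction using the MarkLogic Java Client API; the host, port, credentials, document URIs, and content are placeholders. Either both writes become durable at commit or neither is applied.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.Transaction;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.StringHandle;

public class AcidExample {
    public static void main(String[] args) {
        // Placeholder connection details.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));
        JSONDocumentManager docs = client.newJSONDocumentManager();

        // Open a multi-statement transaction spanning two document writes.
        Transaction txn = client.openTransaction();
        try {
            docs.write("/orders/1.json", new StringHandle("{\"status\":\"placed\"}"), txn);
            docs.write("/inventory/1.json", new StringHandle("{\"count\":41}"), txn);
            txn.commit();     // durable once the commit returns
        } catch (RuntimeException e) {
            txn.rollback();   // atomic: no partial state is left behind
            throw e;
        } finally {
            client.release();
        }
    }
}
```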
NiFi Connectors are a collaborative project between MarkLogic Engineering and the community. The project includes many cookbooks and recipes for using NiFi to work with MarkLogic Server and MarkLogic Data Hub Framework.
A fully conformant Atom Publishing Protocol server in XQuery on top of MarkLogic Server.
Supports multi-step conversion processes for applications.
Our document database model is ideal for handling varied and complex data, providing native storage for JSON, XML, RDF, geospatial, and large binaries. You can load your data "as-is" no matter how complex it is.
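For example, here is a minimal sketch of loading documents "as-is" with the MarkLogic Java Client API; the connection details, URIs, and document contents are placeholders.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class LoadAsIs {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        // JSON and XML are stored natively, with no up-front schema or mapping step.
        client.newDocumentManager().write("/customers/42.json",
                new StringHandle("{\"name\":\"Ada\",\"tier\":\"gold\"}").withFormat(Format.JSON));
        client.newDocumentManager().write("/customers/42.xml",
                new StringHandle("<customer><name>Ada</name></customer>").withFormat(Format.XML));

        client.release();
    }
}
```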
Natively store, manage, and search geospatial data—including points of interest, intersecting paths, and regions of interest, all on a single platform, with powerful geospatial search capabilities at hand!
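As a sketch of what a geospatial search can look like through the Java Client API, the example below assumes a geospatial element-pair index on hypothetical location/lat/lon elements; the connection details, element names, and coordinates are placeholders.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.io.SearchHandle;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.StructuredQueryBuilder;
import com.marklogic.client.query.StructuredQueryDefinition;

public class GeoSearch {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));
        QueryManager queryMgr = client.newQueryManager();
        StructuredQueryBuilder qb = queryMgr.newStructuredQueryBuilder();

        // Find documents whose point falls within a given radius of a center point.
        // Assumes a geospatial element-pair index on <location>/<lat>/<lon>.
        StructuredQueryDefinition query = qb.geospatial(
                qb.geoElementPair(qb.element("location"), qb.element("lat"), qb.element("lon")),
                qb.circle(37.5, -122.2, 10));

        SearchHandle results = queryMgr.search(query, new SearchHandle());
        System.out.println("Matches: " + results.getTotalResults());

        client.release();
    }
}
```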
Hadoop is an open-source framework for distributed processing of large data sets across clusters of computers.
Uses the standard Kafka APIs and libraries to subscribe to Kafka topics and consume messages.
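The connector itself is configuration-driven; purely to illustrate the underlying pattern (this is not the connector's own code), the hedged sketch below uses the standard Kafka consumer API to subscribe to a hypothetical topic and write each message into MarkLogic as a JSON document. The broker, topic, group id, and connection details are placeholders, and the message values are assumed to be JSON.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.StringHandle;

public class TopicToMarkLogic {
    public static void main(String[] args) {
        // Standard Kafka consumer configuration (placeholder broker and group id).
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "marklogic-loader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));
        JSONDocumentManager docs = client.newJSONDocumentManager();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));   // hypothetical topic name
            for (int i = 0; i < 10; i++) {           // bounded polling loop for the sketch
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Write each message (assumed JSON) as a document keyed by topic, partition, and offset.
                    String uri = "/kafka/" + record.topic() + "/"
                            + record.partition() + "-" + record.offset() + ".json";
                    docs.write(uri, new StringHandle(record.value()));
                }
            }
        }
        client.release();
    }
}
```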
The MarkLogic connector for Apache Spark enables users to query data in MarkLogic, manipulate it using Spark operations, and write back to MarkLogic.
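As a rough sketch of that workflow in Java: the Spark DataFrameReader calls below are standard Spark API, but the format name, option keys, and column name are assumptions made for illustration; consult the connector's documentation for the exact values.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkReadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("marklogic-read-sketch")
                .master("local[*]")
                .getOrCreate();

        // Format name and option keys are assumptions for illustration only.
        Dataset<Row> rows = spark.read()
                .format("marklogic")
                .option("spark.marklogic.client.uri", "user:password@localhost:8000")
                .option("spark.marklogic.read.opticQuery", "op.fromView('Example', 'Customers')")
                .load();

        // Standard Spark transformations apply; "tier" is a hypothetical column.
        rows.filter("tier = 'gold'").show();
        spark.stop();
    }
}
```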
MarkLogic Content Pump (MLCP) is an open-source, Java-based command-line tool. MLCP provides the fastest way to import, export, and copy data to or from MarkLogic databases.
A Java-based tool for importing data from MongoDB into MarkLogic.
A fully managed, fully automated cloud service to integrate data from silos. Powered by MarkLogic Server, the service enables agile teams to start integrating and curating data immediately, with no infrastructure to buy or manage.
A Java tool designed for bulk content reprocessing of documents stored in MarkLogic. CORB stands for Content Reprocessing in Bulk; it is a multi-threaded workhorse for large reprocessing jobs.
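CORB itself is driven by URI-selector and process modules supplied as options; as a comparable (but distinct) approach in Java, the hedged sketch below uses the Java Client API's Data Movement SDK to reprocess a collection in parallel batches. The connection details and collection name are placeholders, and "redact" is a hypothetical server-side transform assumed to be installed already. This is not CORB's own API.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.ApplyTransformListener;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.QueryBatcher;
import com.marklogic.client.document.ServerTransform;
import com.marklogic.client.query.StructuredQueryBuilder;

public class BulkReprocess {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));
        DataMovementManager dmm = client.newDataMovementManager();
        StructuredQueryBuilder qb = client.newQueryManager().newStructuredQueryBuilder();

        // Select every document in a collection and apply a server-side transform to each,
        // in parallel batches. "redact" is a hypothetical transform name.
        QueryBatcher batcher = dmm.newQueryBatcher(qb.collection("customers"))
                .withBatchSize(100)
                .withThreadCount(8)
                .onUrisReady(new ApplyTransformListener()
                        .withTransform(new ServerTransform("redact")));

        dmm.startJob(batcher);
        batcher.awaitCompletion();
        dmm.stopJob(batcher);
        client.release();
    }
}
```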