Redcurrant is a grid of hardware providing storage space (disks) and computing power (RAM & CPU) to run services. Each node of the grid is identical and runs the same services within logical partitions. The architecture of Redcurrant is built from several blocks:

  • Distributed Meta Directory : locates services on the grid
  • Storage nodes : store contents using the (Meta2, RawX) pair
  • Conscience : collects and consolidates health status about services

Logical organization of data

Unlike other known object storage solutions in which objects are referenced by a unique generic id, data objects in Redcurrant are organized with a specific layout.

Namespace and Virtual Namespace

The first level of this layout is the namespace. It represents a storage space which can be dedicated to a client application (even though several applications can access the same namespace). It has dedicated storage resources and allows specific configuration of the features needed by the client application (such as chunk_size).
Namespaces can also be sub-divided into virtual namespaces. These are essentially namespaces nested inside a namespace, each with configurable storage quotas. Virtual namespaces are named like sub-domains : nsA.vns1.vns2…


The second level is the container, which groups objects into a common collection (like a directory in a classical file-system). Each container is independent of the others and has a unique name within its namespace.


The last level is the object itself (also called content). It has a unique name within its container. A data object is referenced by a path expressed as the following URL: Namespace[.VirtualNamespace]/Container/Content
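A minimal sketch of parsing that URL layout (the helper name and return shape are illustrative, not part of any Redcurrant API):

```python
# Hypothetical helper illustrating the Redcurrant addressing scheme:
#   Namespace[.VirtualNamespace]/Container/Content

def parse_content_url(url: str):
    """Split a content URL into (namespace, virtual_namespace, container, content)."""
    ns_part, container, content = url.split("/", 2)
    # Virtual namespaces are named like sub-domains: nsA.vns1.vns2...
    ns, _, vns = ns_part.partition(".")
    return ns, vns or None, container, content

print(parse_content_url("nsA.vns1/photos/holiday.jpg"))
# ('nsA', 'vns1', 'photos', 'holiday.jpg')
```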


Storage and directory services

Redcurrant considers object data as unstructured bytes and thus can store any kind of data.

The basic principle of Redcurrant storage is to split object data into chunks of limited size, store these chunks on classical file-systems through a WebDAV service (called RawX), and reference them in a distributed directory. The distributed directory was built to reference an almost unlimited number of objects. To achieve that goal, it is split into three levels, each referencing the next one. Each level is handled by a dedicated service.
The first level (called Meta0) has 65536 entries called prefixes, each referencing the second level.
The second level (called Meta1) has an entry per container, each referencing the third level.
The third level (called Meta2) has an entry per content, each referencing data chunks.

The distributed aspect of the directory is managed by splitting its data across several databases. The Meta1 level is stored in 65536 database files, which can be handled by anywhere from 1 to 65536 services; at the Meta2 level, each container is stored in a dedicated database file.
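As a rough sketch of the three-level lookup: assuming the 16-bit Meta0 prefix is derived from a hash of the container name (the exact hashing scheme here is an assumption made for illustration), resolving a container walks Meta0 to find the Meta1 service, then Meta1 to find the Meta2 service holding that container:

```python
import hashlib

META0_ENTRIES = 65536  # one Meta0 entry per 2-byte prefix

def container_prefix(container: str) -> int:
    """Map a container name to a 16-bit prefix (hash choice is an assumption)."""
    digest = hashlib.sha256(container.encode()).digest()
    return int.from_bytes(digest[:2], "big")  # value in [0, 65535]

def resolve(container: str, meta0: dict, meta1: dict):
    """Walk Meta0 -> Meta1 to find the Meta2 service holding a container."""
    prefix = container_prefix(container)
    meta1_service = meta0[prefix]                       # Meta0: prefix -> Meta1 service
    meta2_service = meta1[(meta1_service, container)]   # Meta1: container -> Meta2 service
    return meta2_service
```

The dictionaries stand in for the Meta0 and Meta1 databases; in the real system each level is a replicated service, not an in-process map.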


To ease grid administration, a smart component inside the grid distributes write requests efficiently and automatically, according to equipment-load criteria recorded in real time. We call it the conscience of the grid. The conscience:

  • Is unique to a namespace
  • Gives the grid of machines its cohesion
  • Knows everything about every service in the grid
  • Sends service health information to everyone in the grid
  • Uses scores to know which services are part of the namespace and their states; a score is a number in [0..100] which represents the usability of a service:
    • 100 : service should be used in priority
    • 0 : service is unavailable and must not be used
  • Has an agent on each node
  • Each agent collects statistics from local services and sends them to the conscience

The conscience computes the scores and sends the scored list back to each agent, so each node always knows the right service to use.
The conscience works in the background ⇒ Redcurrant keeps working even if the conscience is down.
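The scored list lets an agent balance requests while excluding dead services. A minimal sketch of score-weighted selection (the function and data shapes are assumptions, not the actual agent code):

```python
import random

# Scores are in [0, 100]: 0 means the service must not be used,
# higher values mean the service should be preferred.

def pick_service(scored_services: dict) -> str:
    """Pick a usable service, weighting the random choice by its score."""
    usable = {name: score for name, score in scored_services.items() if score > 0}
    if not usable:
        raise RuntimeError("no usable service")
    names = list(usable)
    return random.choices(names, weights=[usable[n] for n in names])[0]

services = {"rawx-1": 80, "rawx-2": 45, "rawx-3": 0}
print(pick_service(services))  # never returns "rawx-3": its score is 0
```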


Crawlers are asynchronous batch processes running on Redcurrant nodes. Crawlers have several missions:

  • rebuild lost chunks if DUPLICATION or RAIN is used (see storage policy)
  • change the storage policy, and thus optimize data placement / cost
  • move data across Redcurrant nodes, for ops activity or load balancing on all nodes
  • apply deduplication
  • purge deleted contents when versioning is activated
  • …

Whenever possible, these tasks run in the background to limit the impact on production performance.
Crawlers run at two levels:

  • meta2: for content-related treatment
  • rawx: for chunk-related treatment
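As an illustration of a rawx-level crawler, the sketch below walks a chunk store and flags empty chunk files, a crude stand-in for the real integrity checks (the directory layout and the check itself are assumptions made for illustration):

```python
import os

def crawl_chunks(chunk_root: str):
    """Yield paths of suspicious (empty) chunk files under chunk_root.

    A real crawler would verify checksums and cross-check the Meta2
    references; an empty file is used here as a trivial failure signal.
    """
    for dirpath, _dirs, files in os.walk(chunk_root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) == 0:
                yield path
```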

Standard Interfaces

Redcurrant manages billions of files and petabytes of data in a single namespace through an object API. This API hides data-management complexity, so the user never needs to know the physical location of the data: no server name, no partition or directory where the data is located. The user only needs to know the content URL.

Data access in Redcurrant is done via basic verbs: Delete, Read, Write, Append. Redcurrant does not allow stored data to be modified in place; such a change is necessarily a deletion followed by a new write. This allows optimized disk-space usage and avoids bottlenecks and access contention. The API is available in several programming languages and will soon be complemented by an S3 gateway, so that any S3-compatible client can use Redcurrant with no effort. Please check the roadmap.
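The write-once model can be illustrated with a toy in-memory store (a hypothetical stand-in, not the actual Redcurrant client API): any modification is a delete followed by a new write, while Append only adds bytes.

```python
class ObjectStore:
    """Toy store illustrating the Delete/Read/Write/Append verbs (illustrative only)."""

    def __init__(self):
        self._data = {}

    def write(self, url: str, payload: bytes):
        if url in self._data:
            # Stored data is immutable: no in-place overwrite.
            raise ValueError("content is immutable; delete it first")
        self._data[url] = payload

    def read(self, url: str) -> bytes:
        return self._data[url]

    def delete(self, url: str):
        del self._data[url]

    def append(self, url: str, payload: bytes):
        self._data[url] = self._data[url] + payload

    def update(self, url: str, payload: bytes):
        """A 'modification' is always delete + fresh write."""
        self.delete(url)
        self.write(url, payload)
```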

User Tools