Installation guide

Introduction

The goal of this document is to describe a proper way to install the Redcurrant product in single-node mode. The reader is expected to be familiar with Redcurrant concepts (namespace, container, content) and services (meta0, meta1, meta2, rawx, conscience, …); otherwise, read the Architecture page first.

Requirements

Hardware

There is no hardware requirement beyond an x86 CPU and 10 GB of disk space. This single-node installation procedure expects neither a specific storage device nor any particular capacity, and can be executed inside a VM.

Software

  • Operating System: CentOS 5 or 6, 64-bit (advised)
  • Sun/Oracle JVM, version 1.6 or later
  • Yum repositories providing the Redcurrant software as well as EPEL must be available from the server. Yum configuration is not part of the present document.

Conventions

The following notation is used throughout the document. Command lines to be executed are shown in a bordered paragraph with a grey background, in 'Courier New' font.

  root@mysystem:> Run_this_command this_parameter

Custom parameters are always preceded by a $ char. Example:

  $tcp_port
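
For instance, with the port plan used later in this guide ($port_conscience = 6000), a configuration directive written here as

  listen=$port_conscience

is to be typed as

  listen=6000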

Installation set

The minimum installation set for Redcurrant 1.8.4 is composed of:

  • 1 Zookeeper
  • 1 Meta0 service
  • 3 Meta1 services
  • 1 Conscience
  • 1 Gridinit
  • 1 Gridagent

Such a reduced installation only allows container management and score management.

The present document will provide a guide toward a more complete installation allowing content management, with:

  • 3 Meta2
  • 6 Rawx
  • 3 Sqlx
  • 2 Rainx

Path convention

Redcurrant relies on two main root directories, each with its own tree convention.

Path                                        Description
/GRID                                       Configuration files
/GRID/$hostname                             Configuration for the given host, usually the local host
/GRID/$hostname/conf                        Configuration files of server-dependent processes, like gridinit and gridagent
/GRID/$namespace                            Configuration for the given namespace
/GRID/$namespace/$storagedevice             Configuration for the given namespace, belonging to the given storage device
/GRID/$namespace/$storagedevice/conf        Configuration files of namespace-dependent processes, like meta0, meta1, meta2, conscience
/DATA                                       Persistent storage
/DATA/$namespace                            Persistent storage for the given namespace
/DATA/$namespace/$storagedevice             Persistent storage for the given namespace and storage device
/DATA/$namespace/$storagedevice/$service    Depending on the service, this mount point must be sized appropriately
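
For example, on a node named singlenode hosting the namespace TESTNS (the names used in the sample outputs of this document), the configuration of the first meta2 service lives under /GRID/TESTNS/singlenode/conf/ and its persistent data under /DATA/TESTNS/singlenode/meta2-1.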

Note

Please note that command output in the following examples may differ slightly, mainly due to IP addresses, node names and namespace names.

Please refer to the Redcurrant release notes for a description of the new features and concepts embedded in this release.

Installation

In this tutorial, we will configure services to listen on the following TCP ports:

Process      Ports                                Later referenced as
Conscience   6000                                 $port_conscience
Meta0        6001                                 $port_meta0
Meta1        6002, 6003, 6004                     $port_meta1
Meta2        6021, 6022, 6023                     $port_meta2
Rawx         6007, 6008, 6015, 6026, 6027, 6028   $port_rawx
Sqlx         6014, 6016, 6017                     $port_sqlx
Rainx        6024, 6025                           $port_rainx
Vnsagent     6005                                 $port_vnsagent

Packages installation

  [root@singlenode ~]# yum install redcurrant-server redcurrant-common redcurrant-client
  [root@singlenode ~]# yum install zookeeper
  [root@singlenode ~]# yum install redcurrant-grid-init.x86_64
  [root@singlenode ~]# yum install redcurrant-mod-httpd.x86_64
  [root@singlenode ~]# yum install redcurrant-mod-httpd-rainx.x86_64
  [root@singlenode ~]# yum install redcurrant-rsyslog.noarch

BEWARE: Zookeeper requires the Sun JDK and does not work well with gcj / openjdk. Make sure that the default java is the Sun edition, version >= 1.6:

  [root@singlenode ~]# java -version
  java version "1.6.0_29"
  Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
  Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)

Zookeeper

Create /etc/gridstorage.conf.d/$namespace with the following content:

  [$namespace]
  conscience=$ip:$port_conscience
  zookeeper=$ip:2181
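
As an illustration, with the namespace TESTNS and the node IP 192.168.244.143 used in the sample outputs of this document, /etc/gridstorage.conf.d/TESTNS would contain:

  [TESTNS]
  conscience=192.168.244.143:6000
  zookeeper=192.168.244.143:2181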

Start zookeeper at boot

  [root@singlenode ~]# chkconfig zookeeper on
  [root@singlenode ~]# service zookeeper start

Now initialize the structure of Zookeeper's content:

  [root@singlenode ~]# zk-bootstrap.py $namespace
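
Optionally, you can check that Zookeeper answers by listing the root znode with the standard Zookeeper command line client (the exact name and path of the client script depend on the zookeeper package):

  [root@singlenode ~]# zkCli.sh -server 127.0.0.1:2181 ls /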

Gridinit and gridagent

Prepare directory

  [root@singlenode ~]# mkdir -p /GRID/$singlenode/{conf,core,logs,run,spool}
  [root@singlenode ~]# chown -R admgrid:admgrid /GRID/$singlenode/
  [root@singlenode ~]# mkdir /GRID/common/run
  [root@singlenode ~]# chown admgrid:admgrid  /GRID/common/run

Prepare configuration

  [root@singlenode conf]# cd /GRID/$singlenode/conf/
  [root@singlenode conf]# cp /GRID/common/conf/gridinit.* .

Replace /GRID/common by /GRID/$singlenode in the copied configuration files.

  [root@singlenode conf]# sed -i -e "s@/GRID/common@/GRID/$singlenode@g" *
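
As an illustration (the exact keys depend on the gridinit template shipped in /GRID/common/conf), a hypothetical line such as

  pidfile=/GRID/common/run/gridinit.pid

would be rewritten by the sed command above into

  pidfile=/GRID/$singlenode/run/gridinit.pid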

Prepare gridinit configuration directory. Gridinit will process each file found there:

  [root@singlenode conf]# mkdir gridinit.conf.d

Create a service file for gridagent in /GRID/$singlenode/conf/gridinit.conf.d/gridagent

  [Service.gridagent]
  command=/usr/local/bin/gridagent --supervisor /GRID/common/conf/gridagent.conf /GRID/common/conf/gridagent.log4crc
  enabled=true
  start_at_boot=yes
  on_die=respawn
  group=common,common,$singlenode,gridagent
  env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Now start gridinit

  [root@singlenode gridinit.conf.d]# service gridinit start

And check status

  [root@singlenode gridinit.conf.d]# gridinit_cmd status
  KEY       STATUS      PID GROUP
  gridagent UP         3282 common,common,$singlenode,gridagent

From now on, gridinit and gridagent are ready.

Start gridinit at boot

  [root@singlenode ~]# chkconfig --add gridinit
  [root@singlenode ~]# chkconfig gridinit on

The next services are namespace-bound. Prepare the namespace-level directories:

  [root@singlenode gridinit.conf.d]# mkdir -p /GRID/$namespace/{core,run,logs,$singlenode}
  [root@singlenode gridinit.conf.d]# mkdir -p /GRID/$namespace/$singlenode/{core,conf,logs,run}
  [root@singlenode gridinit.conf.d]# chown -R admgrid:admgrid /GRID

Conscience

Create a service file for conscience (/GRID/$singlenode/conf/gridinit.conf.d/conscience-1)

[Service.$namespace-$singlenode-conscience-1]
command=/usr/local/bin/gridd /GRID/$namespace/$singlenode/conf/conscience-1.conf /GRID/$namespace/$singlenode/conf/conscience-1.log4crc
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,conscience
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

This file references namespace-wide configuration. Prepare the directories:

[root@singlenode gridinit.conf.d]# mkdir -p /GRID/$namespace/$singlenode/{conf,run}
[root@singlenode gridinit.conf.d]# chown -R admgrid:admgrid /GRID/$namespace/$singlenode

Create conscience configuration file /GRID/$namespace/$singlenode/conf/conscience-1.conf

[General]
daemon=false
to_op=300000
to_cnx=300000
pidfile=/GRID/$namespace/$singlenode/run/conscience-1.pid

[Service]
namespace=$namespace
type=conscience
register=false
load_ns_info=false

[Server.conscience]
min_workers=30
min_spare_workers=30
max_spare_workers=100
max_workers=200
listen=$port_conscience
plugins=conscience,stats,fallback

[Plugin.stats]
path=/usr/local/lib64/grid/msg_stats.so

[Plugin.fallback]
path=/usr/local/lib64/grid/msg_fallback.so

[Plugin.conscience]
path=/usr/local/lib64/grid/msg_conscience.so
param_namespace=$namespace
param_chunk_size=10485760
param_score_timeout=300
param_meta0=$ip:$port_meta0
param_events=/GRID/$namespace/$singlenode/conf/conscience-1.events
param_service.default.score_timeout=300
param_service.default.score_variation_bound=5
param_service.default.score_expr=100
param_service.sqlx.score_timeout=300
param_service.rainx.score_timeout=300

Create event configuration file /GRID/$namespace/$singlenode/conf/conscience-1.events

*=drop

Create log4c configuration file /GRID/$namespace/$singlenode/conf/conscience-1.log4crc

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE log4c SYSTEM "">
<log4c>
  <config>
      <bufsize>2048</bufsize>
      <debug level="4"/>
      <nocleanup>0</nocleanup>
  </config>
  <category name="root" priority="info" appender="RC,$singlenode,$singlenode,conscience-1"/>
  <appender name="RC,$singlenode,$singlenode,conscience-1" type="syslog" layout="basic_r"/>
  <layout name="basic_r" type="basic_r"/>
</log4c>

Now reload gridinit service config:

[root@singlenode gridinit.conf.d]# gridinit_cmd reload

Check status:

[root@singlenode gridinit.conf.d]# gridinit_cmd status
KEY                               STATUS      PID GROUP
gridagent                           UP         3282 common,common,$singlenode,gridagent
$namespace-$singlenode-conscience-1 UP        10762 $namespace,$singlenode,$singlenode,conscience

Request status from conscience

[root@singlenode gridinit.conf.d]# redc-cluster $namespace
NAMESPACE INFORMATION
                Name : $namespace
        Chunk size : 10485760 bytes
              Option : writable_vns = $namespace
-- meta0 --
    $ip:$port_meta0           100
-- meta2 --
-- meta1 --
-- solr --
-- rawx --

From now on, conscience is ready.

Meta0

Create a service file for meta0 (/GRID/$singlenode/conf/gridinit.conf.d/meta0-1)

[Service.$namespace-$singlenode-meta0-1]
command=/usr/local/bin/meta0_server -p /GRID/$namespace/$singlenode/run/meta0-1.pid -s RC,$namespace,$singlenode,meta0-1 -O Endpoint=$ip:$port_meta0 $namespace /DATA/$namespace/$singlenode/meta0-1
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,meta0
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Prepare data directory

[root@singlenode gridinit.conf.d]# mkdir -p /DATA/$namespace/$singlenode/meta0-1
[root@singlenode gridinit.conf.d]# chown -R admgrid:admgrid /DATA

Reload gridinit

[root@singlenode gridinit.conf.d]# gridinit_cmd reload

Check status

[root@singlenode gridinit.conf.d]# gridinit_cmd status
KEY                            STATUS      PID GROUP
gridagent                      UP         3199 common,common,singlenode,gridagent
TESTNS-singlenode-conscience-1 UP         3198 TESTNS,singlenode,singlenode,conscience
TESTNS-singlenode-meta0-1      UP         3374 TESTNS,singlenode,singlenode,meta0

Meta1

Create a service file for each meta1 (/GRID/$singlenode/conf/gridinit.conf.d/meta1-{1,2,3})

[Service.$namespace-$singlenode-meta1-1]
command=/usr/local/bin/meta1_server -p /GRID/$namespace/$singlenode/run/meta1-1.pid -s RC,$namespace,$singlenode,meta1-1 -O Endpoint=$ip:$port_meta1 $namespace /DATA/$namespace/$singlenode/meta1-1
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,meta1
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Prepare data directory

[root@singlenode gridinit.conf.d]# mkdir /DATA/$namespace/$singlenode/meta1-{1,2,3}
[root@singlenode gridinit.conf.d]# chown admgrid:admgrid /DATA/$namespace/$singlenode/meta1*

Reload gridinit

[root@singlenode gridinit.conf.d]# gridinit_cmd reload

Check status

[root@singlenode gridinit.conf.d]# gridinit_cmd status
KEY                            STATUS      PID GROUP
gridagent                      UP         3199 common,common,singlenode,gridagent
TESTNS-singlenode-conscience-1 UP         3198 TESTNS,singlenode,singlenode,conscience
TESTNS-singlenode-meta0-1      UP         3374 TESTNS,singlenode,singlenode,meta0
TESTNS-singlenode-meta1-1      UP         3429 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-2      UP         3431 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-3      UP         3430 TESTNS,singlenode,singlenode,meta1

Check conscience. Meta1 services are now displayed

[root@singlenode gridinit.conf.d]# redc-cluster $namespace
NAMESPACE INFORMATION
                Name : TESTNS
          Chunk size : 10485760 bytes
              Option : writable_vns = TESTNS
-- meta0 --
192.168.244.143:6001                     100
-- meta2 --
-- meta1 --
192.168.244.143:6004                      98
192.168.244.143:6003                      98
192.168.244.143:6002                      98
-- rawx --

Initialize HC

Meta0

[root@singlenode conf]# meta0_init -O NbReplicas=3 -O IgnoreDistance=on $namespace
  1335779989.114 03547 5228 INF g.m.c   META0 filled!
[root@singlenode conf]# meta0_client $namespace reload

Meta2

Create a service file for each meta2 (/GRID/$singlenode/conf/gridinit.conf.d/meta2-{1,2,3})

[Service.$namespace-$singlenode-meta2-1]
command=/usr/local/bin/meta2_server -p /GRID/$namespace/$singlenode/run/meta2-1.pid -s RC,$namespace,$singlenode,meta2-1 -O Endpoint=$ip:$port_meta2 -O Tag=location=$datacenter.$room.$rack.$cluster.$server.$volume $namespace /DATA/$namespace/$singlenode/meta2-1
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,meta2
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

The location tag is not mandatory for meta2_server. Nevertheless, it can help improve data availability in case of a major disaster, by forcing replication between services at the desired distance.

Prepare data directory

[root@singlenode conf]# mkdir /DATA/$namespace/$singlenode/meta2-{1,2,3}
[root@singlenode conf]# chown admgrid:admgrid /DATA/$namespace/$singlenode/meta2*

Reload gridinit

[root@singlenode conf]# gridinit_cmd reload

Check status

[root@singlenode conf]# gridinit_cmd status
KEY                            STATUS      PID GROUP
gridagent                      UP         3199 common,common,singlenode,gridagent
TESTNS-singlenode-conscience-1 UP         3198 TESTNS,singlenode,singlenode,conscience
TESTNS-singlenode-meta0-1      UP         3374 TESTNS,singlenode,singlenode,meta0
TESTNS-singlenode-meta1-1      UP         3429 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-2      UP         3431 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-3      UP         3430 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta2-1      UP         3500 TESTNS,singlenode,singlenode,meta2
TESTNS-singlenode-meta2-2      UP         3501 TESTNS,singlenode,singlenode,meta2
TESTNS-singlenode-meta2-3      UP         6837 TESTNS,singlenode,singlenode,meta2

Check conscience

[root@singlenode conf]# redc-cluster TESTNS
NAMESPACE INFORMATION
                Name : TESTNS
          Chunk size : 10485760 bytes
              Option : writable_vns = TESTNS
-- meta0 --
192.168.244.143:6001                     100
-- meta2 --
192.168.244.143:6023                      18
192.168.244.143:6022                      18
192.168.244.143:6021                      18
-- meta1 --
192.168.244.143:6004                      25
192.168.244.143:6003                      25
192.168.244.143:6002                      25

Container purge crawler

Coming with the new meta2v2, a container crawler is needed to handle the deletion of chunks that belonged to deleted, deduplicated or out-of-versions contents. This crawler is composed of two parts:

  • A purge service which is unique per storage node and multi-namespace
  • A container crawler which is dedicated to a meta2 data volume

To start the purge service, create the following gridinit service config file: /GRID/$singlenode/conf/gridinit.conf.d/action-purge-1

[Service.common-action-purge-1]
command=/usr/local/bin/action_purge_container_service -s "RC,common,common,action_purge-1"
enabled=true
on_die=respawn
group=common,action-purge
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

To start the container crawler, create a gridinit service config file like the following for each meta2 service: /GRID/$singlenode/conf/gridinit.conf.d/crawler-purge-{1,2,3}

[Service.$namespace-crawler-purge-1]
command=/usr/local/bin/redc-crawler -s "RC,$namespace,$singlenode,crawler_purge-1" -Otrip=trip_container -Oaction=action_purge_container -- -trip_container.s=/DATA/$namespace/$singlenode/meta2-1 -action_purge_container.n=$namespace -trip_container.infinite=on
enabled=true
on_die=respawn
group=$namespace,crawler-purge
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Restart dbus to apply new security policies:

[root@singlenode conf]# service messagebus restart

Reload gridinit to start services:

[root@singlenode conf]# gridinit_cmd reload

Container deduplication crawler

Coming with the new meta2v2, a deduplication crawler is available to deduplicate identical contents at a container level. This crawler is composed of two parts:

  • A deduplication service which is unique per storage node and multi-namespace
  • A container crawler which is dedicated to a meta2 data volume

To start the dedup service, create the following gridinit service config file: /GRID/$singlenode/conf/gridinit.conf.d/action-dedup-1

[Service.common-action-dedup-1]
command=/usr/local/bin/action_dedup_container_service -s "RC,common,common,action_dedup-1"
enabled=true
on_die=respawn
group=common,action-dedup
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

To start the deduplication crawler, create a gridinit service config file like the following for each meta2 service: /GRID/$singlenode/conf/gridinit.conf.d/crawler-dedup-{1,2,3}

[Service.$namespace-$singlenode-crawler-dedup-1]
command=/usr/local/bin/redc-crawler -s "RC,$namespace,$singlenode,crawler_dedup-1" -Otrip=trip_container -Oaction=action_dedup_container -- -trip_container.s=/DATA/$namespace/$singlenode/meta2-1 -action_dedup_container.n=$namespace -trip_container.infinite=on
enabled=true
on_die=respawn
group=common,crawler-dedup
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

RAWX

Create a service file for each rawx in /GRID/$singlenode/conf/gridinit.conf.d/rawx-{1,2,3,4,5,6}:

[Service.$namespace-$singlenode-rawx-1]
command=/usr/local/bin/redc-rawx-monitor /GRID/$namespace/$singlenode/conf/rawx-1-monitor.conf /GRID/$namespace/$singlenode/conf/rawx-1-monitor.log4crc
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,rawx
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Create a configuration for each rawx monitor in /GRID/$namespace/$singlenode/conf/rawx-{1,2,3,4,5,6}-monitor.conf

[Default]
daemon=false
pidfile=/GRID/$namespace/$singlenode/run/rawx-1-monitor.pid

[Child]
command=/usr/sbin/httpd.worker -D FOREGROUND -f /GRID/$namespace/$singlenode/conf/rawx-1-httpd.conf
respawn=true
rlimit.stack_size=1048576
rlimit.core_size=-1
rlimit.max_files=32768

[Service]
ns=$namespace
type=rawx
addr=$ip:$port_rawx
location=$datacenter.$room.$rack.$cluster.$server.$volume
stgclass=$storage_class_of_the_device

[Volume]
docroot=/DATA/$namespace/$singlenode/rawx-1

Do not forget the location and stgclass directives, which are mandatory in Redcurrant 1.7+. location specifies the physical location of the rawx service and is used by the storage policy engine; please refer to the administration guide for details on its syntax. stgclass specifies the type of storage used by this rawx: it is a free label, but it must be self-explanatory, like LOCAL_SATA.
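
For example (illustrative values only, to be adapted to your own topology), the first rawx of this guide could declare:

location=dc1.room1.rack1.cluster1.singlenode.vol01
stgclass=LOCAL_SATA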

Create a httpd config file for each rawx in /GRID/$namespace/$singlenode/conf/rawx-{1,2,3,4,5,6}-httpd.conf

LoadModule dav_module        /usr/lib64/httpd/modules/mod_dav.so
LoadModule dav_rawx_module   /usr/lib64/httpd/modules/mod_dav_rawx.so
LoadModule mime_module       /usr/lib64/httpd/modules/mod_mime.so
LoadModule log_config_module /usr/lib64/httpd/modules/mod_log_config.so
LoadModule logio_module      /usr/lib64/httpd/modules/mod_logio.so

Listen $ip:$port_rawx
PidFile /GRID/$namespace/$singlenode/run/rawx-1-httpd.pid
ServerRoot /GRID/$namespace/$singlenode/core
ServerName localhost
ServerSignature Off
ServerTokens Prod
DocumentRoot /GRID/$namespace/$singlenode/run
TypesConfig /etc/mime.types

User  admgrid
Group admgrid

LogFormat "%h %l %t \"%r\" %>s %b %I %D" log/common
ErrorLog /GRID/$namespace/$singlenode/logs/rawx-1-httpd-errors.log
CustomLog /GRID/$namespace/$singlenode/logs/rawx-1-httpd-access.log log/common
LogLevel info

<IfModule mod_env.c>
  SetEnv nokeepalive 1
  SetEnv downgrade-1.0 1
  SetEnv force-response-1.0 1
</IfModule>

<IfModule prefork.c>
  MaxClients       150
  StartServers       5
  MinSpareServers    5
  MaxSpareServers   10
</IfModule>

<IfModule worker.c>
  StartServers           5
  MaxClients           100
  MinSpareThreads        5
  MaxSpareThreads       25
  ThreadsPerChild       10
  MaxRequestsPerChild    0
</IfModule>
 
DavDepthInfinity Off

grid_hash_width 2
grid_hash_depth 2
grid_docroot /DATA/$namespace/$singlenode/rawx-1
grid_namespace $namespace
grid_dir_run /GRID/$namespace/$singlenode/run

<Directory />
    DAV rawx
    AllowOverride None
</Directory>
<VirtualHost $ip:$port_rawx>
    # DO NOT REMOVE (even if empty) !
</VirtualHost>

Create the rawx directories. Chunks will be stored there:

[root@singlenode gridinit.conf.d]# mkdir /DATA/$namespace/$singlenode/rawx-{1,2,3,4,5,6}
[root@singlenode gridinit.conf.d]# chown admgrid:admgrid /DATA/$namespace/$singlenode/rawx*

Reload gridinit

[root@singlenode conf]# gridinit_cmd reload

Check status

[root@singlenode gridinit.conf.d]# gridinit_cmd status
KEY                            STATUS      PID GROUP
gridagent                      UP         3571 common,common,singlenode,gridagent
TESTNS-singlenode-conscience-1 UP         3596 TESTNS,singlenode,singlenode,conscience
TESTNS-singlenode-meta0-1      UP         3567 TESTNS,singlenode,singlenode,meta0
TESTNS-singlenode-meta1-1      UP         3628 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-2      UP         3630 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta1-3      UP         3629 TESTNS,singlenode,singlenode,meta1
TESTNS-singlenode-meta2-1      UP         3661 TESTNS,singlenode,singlenode,meta2
TESTNS-singlenode-meta2-2      UP         3662 TESTNS,singlenode,singlenode,meta2
TESTNS-singlenode-rawx-1         UP         5287 TESTNS,singlenode,singlenode,rawx
TESTNS-singlenode-rawx-2         UP         5363 TESTNS,singlenode,singlenode,rawx
TESTNS-singlenode-rawx-3         UP         1905 TESTNS,singlenode,singlenode,rawx
TESTNS-singlenode-rawx-4         UP         1903 TESTNS,singlenode,singlenode,rawx

SQLX

Create a service file for each sqlx in /GRID/$singlenode/conf/gridinit.conf.d/sqlx-{1,2,3}

[Service.$namespace-$singlenode-sqlx-1]
command=/usr/local/bin/sqlx_server -s 'RC,$namespace,$singlenode,sqlx-1' -O Endpoint=$ip:$port_sqlx $namespace /DATA/$namespace/$singlenode/sqlx-1
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,sqlx
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Create data directory

[root@singlenode ~]# mkdir -p /DATA/$namespace/$singlenode/sqlx-{1,2,3}
[root@singlenode ~]# chown -R admgrid:admgrid /DATA/$namespace/$singlenode/sqlx-*

Reload gridinit and check status

RAINX

Rainx is implemented as a webdav service, like rawx. Thus, rainx is started by gridinit through the rawx monitor.

Create a service file for each rainx in /GRID/$singlenode/conf/gridinit.conf.d/rainx-{1,2}.

[Service.$namespace-$singlenode-rainx-1]
command=/usr/local/bin/redc-rawx-monitor /GRID/$namespace/$singlenode/conf/rainx-1-monitor.conf /GRID/$namespace/$singlenode/conf/rainx-1-monitor.log4crc
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,rainx
env.PATH=/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

Create configuration for each rainx monitor in /GRID/$namespace/$singlenode/conf/rainx-{1,2}-monitor.conf:

[Default]
daemon=false
pidfile=/GRID/$namespace/$singlenode/run/rainx-1-monitor.pid

[Child]
command=/usr/sbin/httpd.worker -D FOREGROUND -f /GRID/$namespace/$singlenode/conf/rainx-1-httpd.conf
respawn=true
rlimit.stack_size=1048576
rlimit.core_size=-1
rlimit.max_files=32768

[Service]
ns=$namespace
type=rainx
addr=$ip:$port_rainx

Create httpd config file for each rainx in /GRID/$namespace/$singlenode/conf/rainx-{1,2}-httpd.conf:

LoadModule dav_module        /usr/lib64/httpd/modules/mod_dav.so
LoadModule dav_rainx_module   /usr/lib64/httpd/modules/mod_dav_rainx.so
LoadModule mime_module          /usr/lib64/httpd/modules/mod_mime.so
LoadModule log_config_module /usr/lib64/httpd/modules/mod_log_config.so
LoadModule logio_module         /usr/lib64/httpd/modules/mod_logio.so

Listen $ip:$port_rainx
PidFile /GRID/$namespace/$singlenode/run/rainx-1-httpd.pid
ServerRoot /GRID/$namespace/$singlenode/core
ServerName localhost
ServerSignature Off
ServerTokens Prod
DocumentRoot /GRID/$namespace/$singlenode/run
TypesConfig /etc/mime.types
User  admgrid
Group admgrid
LogFormat "%h %l %u %t \"%r\" %>s %b %D %I" log/common
ErrorLog /GRID/$namespace/$singlenode/logs/rainx-1-httpd-errors.log
CustomLog /GRID/$namespace/$singlenode/logs/rainx-1-httpd-access.log log/common
LogLevel info
<IfModule mod_env.c>
    SetEnv nokeepalive 1
    SetEnv downgrade-1.0 1
    SetEnv force-response-1.0 1
</IfModule>
<IfModule prefork.c>
    MaxClients       150
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
</IfModule>
<IfModule worker.c>
    StartServers           5
    MaxClients           100
    MinSpareThreads        5
    MaxSpareThreads       25
    ThreadsPerChild       10
    MaxRequestsPerChild    0
</IfModule>
DavDepthInfinity Off
grid_namespace $namespace
grid_dir_run /GRID/$namespace/$singlenode/run
<Directory />
    DAV rainx
    AllowOverride None
</Directory>
<VirtualHost $ip:$port_rainx>
    # DO NOT REMOVE (even if empty) !
</VirtualHost>

Reload gridinit

[root@singlenode conf]# gridinit_cmd reload

Check status with

[root@singlenode conf]# redc-cluster $namespace

VNS agent

The VNS agent is in charge of collecting information from the directory services and computing the disk space consumption per virtual namespace. One agent per namespace is sufficient.

Create a service file for vnsagent in /GRID/$singlenode/conf/gridinit.conf.d/vns-agent-1:

[Service.$namespace-vns-agent-1]
command=/usr/local/bin/gridd /GRID/$namespace/$singlenode/conf/vns-agent-1.conf /GRID/$namespace/$singlenode/conf/vns-agent-1.log4crc
enabled=true
start_at_boot=yes
on_die=respawn
group=$namespace,$singlenode,$singlenode,vns-agent
env.PATH=/usr/sbin:/usr/bin:/install_root/usr/bin:/usr/local/sbin:/usr/local/bin

Create vnsagent configuration file in /GRID/$namespace/$singlenode/conf/vns-agent-1.conf:

[General]
daemon=false
to_op=300000
to_cnx=300000
pidfile=/GRID/$namespace/run/vns-agent-1.pid

[Service]
namespace=$namespace
type=vns_agent
register=true
stat=

[Server.vns_agent]
min_workers=30
min_spare_workers=10
max_spare_workers=200
max_workers=200
listen=$ip:$port_vnsagent
plugins=vns_agent,stats,fallback

[Plugin.stats]
path=/usr/local/lib64/grid/msg_stats.so

[Plugin.fallback]
path=/usr/local/lib64/grid/msg_fallback.so

[Plugin.vns_agent]
path=/usr/local/lib64/grid/msg_vns_agent.so
# Modify agent frequency to your need
param_space_used_refresh_rate=61

And log4c configuration in /GRID/$namespace/$singlenode/conf/vns-agent-1.log4crc:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE log4c SYSTEM "">
<log4c>
    <config>
        <bufsize>2048</bufsize>
        <debug level="4"/>
        <nocleanup>0</nocleanup>
    </config>
    <category name="root" priority="info" appender="RC,$namespace,$singlenode,vnsagent-1"/>
    <appender name="RC,$namespace,$singlenode,vnsagent-1" type="syslog" layout="basic_r"/>
    <layout name="basic_r" type="basic_r"/>
</log4c>

Configure conscience

Edit the conscience configuration file (/GRID/$namespace/$singlenode/conf/conscience-1.conf) and add the following rules for score calculation in the [Plugin.conscience] section.

param_service.meta1.score_timeout=300
param_service.meta1.score_variation_bound=5
param_service.meta1.score_expr=root(2,((num stat.cpu) * (num stat.req_idle)))
param_service.meta2.score_timeout=300
param_service.meta2.score_variation_bound=5
param_service.meta2.score_expr=((num stat.space)>=5) * root(3,((num stat.cpu)*(num stat.req_idle)*(num stat.space)))
param_service.rawx.score_timeout=300
param_service.rawx.score_variation_bound=5
param_service.rawx.score_expr=((num stat.space)>=3) * root(2,((num stat.cpu)*(num stat.space)))
param_service.sqlx.score_timeout=300
param_service.sqlx.score_variation_bound=5
param_service.sqlx.score_expr=((num stat.space)>=3) * root(2,((num stat.cpu)*(num stat.space)))
param_service.rainx.score_timeout=300
param_service.rainx.score_variation_bound=5
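
To illustrate how such an expression is evaluated (with made-up statistics, purely as an example): if a rawx reports stat.cpu = 90 and stat.space = 49, its expression ((num stat.space)>=3) * root(2,((num stat.cpu)*(num stat.space))) evaluates to 1 * root(2, 90 * 49), i.e. a score of about 66. If stat.space falls below 3, the first factor becomes 0, the score drops to 0 and the service is no longer selected for new allocations.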

Complete the section with the service allocation policy applied when a service is linked to a container. Please refer to the Administration Guide for customization.

# NONE|KEEP: no policy
# APPEND : add a reference
# REPLACE: replace the reference if it exists
# (NONE|APPEND|REPLACE|KEEP)|REPLICA|DISTANCE|FILTER
param_option.service_update_policy.meta1=meta2=NONE|3|1;sqlx=NONE|3|1
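
With the format given above, meta2=NONE|3|1 therefore requests 3 meta2 replicas per container at a minimum distance of 1, with no reference update policy (NONE); the sqlx entry reads the same way.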

Complete the section with the versioning policy directive.

# Default versioning policy (allow versioning and keep maximum 5 versions)
param_option.meta2_max_versions=5

Complete the section with the storage policy directive. This installation guide will implement three different storage policies.

param_storage_conf=/GRID/$namespace/$singlenode/conf/conscience-1.storage
# Default namespace policy
param_option.storage_policy=TWOCOPIES

Create the storage policy definition file in /GRID/$namespace/$singlenode/conf/conscience-1.storage

[STORAGE_POLICY]
# Format: NAMEOFPOLICY=STORAGECLASS:NAMEOFDATASECURITY:DATATREATMENT
# STORAGECLASS is a free label
# matching your storage device type
# DATATREATMENT is not yet implemented
# Default namespace policy is TWOCOPIES
# Another one is available, called NONE
TWOCOPIES=DUMMY:DUPONETWO:NONE
NONE=DUMMY:NONE:NONE
RAID5=DUMMY:RAIN51:NONE
[DATA_SECURITY]
# Format: NAMEOFDATASECURITY=ALGORITHM:DISTANCE:NUMBEROFDUPLICATE
# ALGORITHM=DUP or NONE
# Datasecurity DUPONETWO requires each chunk to be stored twice, with a minimum distance of 1
DUPONETWO=DUP:distance=1|nb_copy=2
# Datasecurity NONE requires each chunk to be stored only once
NONE=DUP:distance=0|nb_copy=1
# Datasecurity RAIN51 requires 5 data chunks and 1 parity chunk. Distance=1 by default
# and can be overridden
RAIN51=RAIN:algo=crs|k=5|m=1
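
To summarize the three policies defined above: TWOCOPIES stores every chunk twice at a distance of at least 1 (roughly 200% of the content size on disk), NONE stores a single unprotected copy, and RAID5 relies on the RAIN51 data security, which splits data into 5 chunks plus 1 parity chunk computed with the crs algorithm (roughly 120% of the content size on disk, tolerating the loss of any single chunk).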

Restart conscience. Find its name in gridinit

[root@singlenode gridinit.conf.d]# gridinit_cmd status
KEY                            STATUS      PID GROUP
gridagent                      UP         3571 common,common,singlenode,gridagent
TESTNS-singlenode-conscience-1 UP         3596 TESTNS,singlenode,singlenode,conscience
...

Then restart

[root@singlenode gridinit.conf.d]# gridinit_cmd restart TESTNS-singlenode-conscience-1
DONE            TESTNS-singlenode-conscience-1  Success

Display status and verify the default policy

[root@singlenode17 conf]# redc-cluster $namespace
NAMESPACE INFORMATION
                Name : TESTNS
          Chunk size : 10485760 bytes
              Option : meta2_max_versions = 5
              Option : storage_policy = TWOCOPIES
              Option : container_max_size = 8192
              Option : writable_vns = TESTNS.Virtual2,TESTNS.Virtual1,TESTNS
              Option : vns_list = TESTNS.Virtual1,TESTNS.Virtual2
      Storage Policy : TWOCOPIES = DUMMY:DUPONETWO:NONE
      Storage Policy : RAID6 = DUMMY:RAIN42:NONE
       Data Security : RAIN51 = RAIN:k=5|m=1|algo=crs
       Data Security : NONE = DUP:0:1
       Data Security : DUPONETWO = DUP:1:2
(…)

Command line

Content management in Meta2v2.

The redc command is the new command line interface for content management in Redcurrant 1.8.

Create container

[root@localhost vagrant]# redc put $namespace/container-test
Container [container-test] created in namespace [TESTNS].

In order to force a versioning policy different from the namespace default (meta2_max_versions in the conscience), use the -O ActivateVersioning option:

  • -1 to create a non-versioned container
  • 0 to create a versioned container with an unlimited number of versions
  • N to create a versioned container with a limit of N versions

[root@localhost vagrant]# redc delete $namespace/container-test
[root@localhost vagrant]# redc put  -O ActivateVersioning=3 $namespace/container-test
Container [container-test] created in namespace [TESTNS].

In order to force a storage policy different from the namespace default, use the -O StoragePolicy option with a valid storage policy name:

[root@localhost vagrant]# redc delete $namespace/container-test
[root@localhost vagrant]# redc put -O StoragePolicy=TWOCOPIES $namespace/container-test
Container [container-test] created in namespace [TESTNS].

Store content

[root@localhost ~]# redc put $namespace/container-test test.txt
Uploaded a new version of content [test.txt] in container [container-test]

In a container without versioning, uploading the same content will generate an error

[root@localhost ~]# redc put $namespace/container-test test.txt
ERROR : Content [test.txt] already exists in container [container-test]

In a container with versioning, the new version of the content will be uploaded

[root@localhost ~]# redc put $namespace/container-test test.txt
Uploaded a new version of content [test.txt] in container [container-test]

List content

[root@localhost ~]# redc get $namespace/container-test
#Listing container=[container-test]
test.txt
#Total in [container-test] : 1 elements

In a container with versioning, use -O ShowInfo=on in order to display versions (and sizes):

[root@localhost ~]# redc get -O ShowInfo=on $namespace/container-test
#Listing container=[container-test]
9095 1 test.txt
9095 2 test.txt
#Total in [container-test] : 2 elements

First column is the content size. Second column is the version number.

Get content

[root@localhost ~]# redc get $namespace/container-test/test.txt /tmp/test.txt

By default, redc get outputs the content to stdout. Specify a filename (here /tmp/test.txt) to store the content in a file.

In order to retrieve a specific version of a content in a versioning enabled container, specify the version in the request URL. By default, redc get will always retrieve the latest version.

[root@localhost ~]# redc get $namespace/container-test/test.txt?version=1 /tmp/test.txt

Please note that if the target file already exists, redc get will not process the request.

Delete content

[root@localhost ~]# redc delete $namespace/container-test/test.txt

In a versioning-enabled container, redc delete does not physically delete content; it rather marks it as DELETED for future deletion. Also note that redc delete deletes the latest version if no version is specified in the URL.

If the latest version of a content is (marked as) deleted, it will not appear in the basic content listing:

[root@localhost ~]# redc get $namespace/container-test
#Listing container=[container-test]
#Total in [container-test] : 0 elements

Nevertheless, in a versioning-enabled container, older versions might still be available:

[root@localhost ~]# redc get -O ShowInfo=on $namespace/container-test
#Listing container=[container-test]
9095 1 test.txt
9095 2 test.txt (deleted)
#Total in [container-test] : 2 elements

Destroy container

It is not possible to delete a non-empty container. In a versioning-enabled container, each version of each content must be (marked as) deleted, as described in the following sequence:

[root@localhost ~]# redc get -O ShowInfo=on $namespace/container-test
#Listing container=[container-test]
9095 1 test.txt
9095 2 test.txt (deleted)
#Total in [container-test] : 2 elements
[root@localhost ~]# redc delete $namespace/container-test
ERROR : Container [container-test] not empty [TESTNS].
(Run action with -v option for technical details.)
[root@localhost ~]# redc delete $namespace/container-test/test.txt?version=1
[root@localhost ~]# redc get -O ShowInfo=on $namespace/container-test
#Listing container=[container-test]
9095 2 test.txt (deleted)
9095 1 test.txt (deleted)
#Total in [container-test] : 2 elements
[root@localhost ~]# redc delete $namespace/container-test

Service management

List services assigned to a container

[root@singlenode ~]# redc-dir list $namespace/container-test meta2
Reference [container-test], services [meta2] linked:
  [1|meta2|10.34.146.70:6021|]
  [1|meta2|10.34.146.70:6022|]
  [1|meta2|10.34.146.70:6023|]

Display rawx location

[root@singlenode ~]# redc-cluster -r $namespace | grep rawx
TESTNS|rawx|192.168.244.128:6008|score=91|tag.up=true|tag.loc=seclin.bureau119.pc1|tag.vol=/DATA/TESTNS/singlenode17/vol02
TESTNS|rawx|192.168.244.128:6007|score=91|tag.up=true|tag.loc=seclin.bureau119.pc1|tag.vol=/DATA/TESTNS/singlenode17/vol01

Assign a sqlx to a container

The database schema must be known before creating the database. In the following example, the schema is named test.

[root@singlenode17 conf]# redc-dir link $namespace/container-test sqlx.test
Service [1|sqlx.test|192.168.244.128:6016|] linked to reference [container-test]

Verify that the container has been assigned the appropriate number of sqlx services, as defined in the conscience. Here we require 3 sqlx:

[root@singlenode17 conf]# redc-dir list $namespace/container-test sqlx.test
Reference [container-test], services [sqlx.test] linked:
   [1|sqlx.test|192.168.244.128:6016|]
   [1|sqlx.test|192.168.244.128:6014|]
   [1|sqlx.test|192.168.244.128:6017|]

Test a sqlx service

[root@singlenode17 conf]# redc-sqlx $namespace/container-test test 'select * from sqlite_master'
# query = "select * from sqlite_master"
# rows = 2
# status = 0
table|admin|admin|2|CREATE TABLE admin (k TEXT PRIMARY KEY NOT NULL, v BLOB DEFAULT NULL)
index|sqlite_autoindex_admin_1|admin|3|(nil)

Unassign a SQLx from a container

[root@singlenode17 conf]# redc-dir unlink $namespace/container-test sqlx.test
Services [sqlx.test] unlinked from reference [container-test]

Troubleshoot

Score

A service is considered unavailable if its score equals 0. The score can temporarily be locked to 0 on a local instance. To unlock the score:

  [root@singlenode gridinit.conf.d]# redc-cluster --unlock-score -S "$namespace|$service|$ip:$port"

The score can also be forced to a specific value for debugging purposes. Here the score is forced to 100:

  [root@singlenode gridinit.conf.d]# redc-cluster --set-score 100 -S "$namespace|$service|$ip:$port"

Permissions

Make sure that every directory under /GRID and /DATA is owned by admgrid:admgrid. If not, change ownership recursively. SELinux can prevent rsyslog from working properly; disable it.
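
For example:

  [root@singlenode ~]# chown -R admgrid:admgrid /GRID /DATA
  [root@singlenode ~]# setenforce 0

Note that setenforce 0 only disables SELinux until the next reboot; make the change permanent in /etc/selinux/config if needed.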

Log files

Process      Log file(s)
Meta0        /GRID/$namespace/logs/meta0.{log,access}
Meta1        /GRID/$namespace/logs/meta1.{log,access}
Conscience   /GRID/$namespace/logs/conscience.{log,access}
Meta2        /GRID/$namespace/logs/meta2.{log,access}
Rawx         /GRID/$namespace/$singlenode/logs/volXX-httpd-{access,errors}.log
Sqlx         /GRID/$namespace/logs/sqlx.{log,access}
Rainx        /GRID/$namespace/$singlenode/logs/rainx-X-httpd-{access,errors}.log
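
For example, to follow the conscience log while restarting it:

  [root@singlenode ~]# tail -f /GRID/$namespace/logs/conscience.log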

Multi node installation

This chapter only provides advice on how to install Redcurrant on several nodes.

Advised distribution

Process      How many              Why / Comment
Conscience   2                     Must be HA; the service is stateless. Active/passive failover with LVS.
Zookeeper    3 at least            Quorum is N/2+1. Zookeeper is self-replicated.
Gridinit     On every node         Except Zookeeper-only nodes.
Gridagent    On every node         Except Zookeeper-only nodes.
Meta0        3                     Must be HA. Meta0 is self-replicated.
Meta1        3 at least            Must be HA. Meta1 is self-replicated.
Meta2        As many as possible   I/O intensive. Use high-performance storage.
Rawx         As many as possible   Space greedy. Use high-capacity storage.
Sqlx         As many as possible   I/O intensive. SQLx is self-replicated.
Rainx        On every node         CPU intensive. Stateless service.

User Tools