<p>Pages tagged "filesystem" by Nico Schottelius, rendered with ikiwiki: <a href="https://www.nico.schottelius.org//tags/filesystem/">https://www.nico.schottelius.org//tags/filesystem/</a></p>
<h1><a href="https://www.nico.schottelius.org//blog/glusterfs-foss-development-is-awesome/">Gluster FOSS development is awesome</a></h1>
<p>Published 2015-03-07, updated 2016-02-25.</p>
<h2>TL;DR</h2>
<p>Last night I suggested a change to <a href="http://www.gluster.org/">Gluster</a> - by the time I woke up,
my patch had already been incorporated - that is awesome!</p>
<h2>The experience</h2>
<p>After beginning my journey with <a href="http://www.gluster.org/">Gluster</a> some months ago
and my recent blog post about
<a href="https://www.nico.schottelius.org//blog/how-to-access-gluster-from-multiple-networks/">How to access gluster from multiple networks</a>,
I had a great experience with GlusterFS yesterday:</p>
<p>After <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020945.html">mentioning that I have the same problem</a> as <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020939.html">何亦</a>
and <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020948.html">suggesting a patch</a>,
I had this incredible experience:</p>
<ol>
<li>Someone (<a href="https://twitter.com/nixpanic">@nixpanic</a> aka Niels) suggested preparing it for inclusion</li>
<li>GlusterFS has <a href="http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow">an organic process for development and testing</a> - seeing the patch go through this process gave me the impression that someone cares about code quality</li>
<li>After creating the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1199577">bug report</a>, the process started smoothly</li>
<li>The patch was reviewed within hours</li>
<li>And finally it was merged into the master branch as well as <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020955.html">included in GlusterFS 3.6.3</a></li>
</ol>
<p>The overall experience as a FOSS <em>contributor</em> can be described by a single word:</p>
<pre><code>A-W-E-S-O-M-E
</code></pre>
<p>I have contributed to many FOSS projects, but this experience was exceptionally good -
thanks to everyone for the help, and keep up the good work!</p>
<h2>Follow up</h2>
<p>If you find this article interesting, you may want to stay updated by following
me and ungleich on Twitter:</p>
<ul>
<li><a href="https://twitter.com/ungleich">@ungleich</a></li>
<li><a href="https://twitter.com/NicoSchottelius">@NicoSchottelius</a></li>
</ul>
<h1><a href="https://www.nico.schottelius.org//blog/how-to-access-gluster-from-multiple-networks/">How to access gluster from multiple networks</a></h1>
<p>Published 2015-02-13, updated 2016-02-25.</p>
<h2>TL;DR</h2>
<p>Create volumes using names instead of IP addresses:</p>
<pre><code>gluster volume create xfs-plain replica 2 transport vmhost1:/home/gluster vmhost2:/home/gluster
</code></pre>
<p>instead of</p>
<pre><code>gluster volume create xfs-plain replica 2 transport 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster
</code></pre>
<p>Then have the names resolve to different IP addresses depending on the host.</p>
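<p>The trick is split-horizon name resolution: the same names resolve to the storage network on the vm hosts and to the public network on the frontend. A minimal sketch using <strong>/etc/hosts</strong> (the public addresses below are placeholders):</p>
<pre><code># /etc/hosts on vmhost1 and vmhost2: names point to the private
# storage network, so the bricks replicate over eth1
192.168.0.1 vmhost1
192.168.0.2 vmhost2

# /etc/hosts (or DNS) on the frontend: the same names point to the
# public addresses - 203.0.113.x is a placeholder range
203.0.113.1 vmhost1
203.0.113.2 vmhost2
</code></pre>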
<h2>The setup</h2>
<p>The basic setup (in our case) looks like this:</p>
<pre><code>---------------------------------
|        Clients / Users        |
---------------------------------
        |
        |
---------------------------------         ---------------------------------
|  frontend (with opennebula)   |     ----| vmhost1 with glusterfs        |
---------------------------------    /    ---------------------------------
        |                           / eth0                           eth1
        |--------------------------&lt;                                  ||
                                    \ eth0                           eth1
                                     \    ---------------------------------
                                      ----| vmhost2 with glusterfs        |
                                          ---------------------------------
</code></pre>
<p>The frontend running <a href="http://www.opennebula.org">OpenNebula</a> connects to
<strong>vmhost1</strong> and <strong>vmhost2</strong> using their public interfaces.</p>
<p>The gluster bricks running on the vm hosts are supposed to communicate
via eth1, so that <a href="http://www.gluster.org/">Gluster</a> traffic does not interfere
with the traffic of the virtual machines to the Internet. The gluster filesystem
on the vmhosts is only intended to be used by the virtual machines running
on those two hosts - an isolated cluster. Thus the volume was initially created
like this:</p>
<pre><code>gluster volume create xfs-plain replica 2 transport 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster
</code></pre>
<h2>The problem</h2>
<p>However, the frontend requires access to the gluster volume, because
<a href="http://www.opennebula.org">OpenNebula</a> needs to copy and import VM images into the gluster
datastore. Even though the <em>glusterd</em> process listens on all IP addresses,
the volume definition records the bricks as 192.168.0.1
and 192.168.0.2; a client fetches this definition on mount and then tries to
contact the bricks directly, so the volume is not reachable from the frontend.</p>
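<p>The failure mode on the frontend then looks roughly like this (the public hostname is a placeholder): the mount command itself reaches <em>glusterd</em>, but the volume file handed back points the client at the private brick addresses:</p>
<pre><code># glusterd answers on the public address, but the client is then told
# to contact the bricks on 192.168.0.1/192.168.0.2 - unreachable from
# this network, so the mount fails or hangs
mount -t glusterfs vmhost1.example.com:/xfs-plain /mnt/gluster
</code></pre>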
<h2>Using name based volumes</h2>
<p>The frontend can reach the vm hosts via <strong>vmhost1</strong> and <strong>vmhost2</strong>,
which resolve to their <strong>public IP addresses</strong> via DNS.</p>
<p>On the vm hosts we created entries in <strong>/etc/hosts</strong> using <a href="http://www.nico.schottelius.org/software/cdist/">cdist</a>
that look as follows:</p>
<pre><code>192.168.0.1 vmhost1
192.168.0.2 vmhost2
</code></pre>
<p>Now we re-created the volume using</p>
<pre><code>gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
gluster volume start xfs-plain
</code></pre>
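<p>Re-creating an existing volume takes a few more steps than shown above. A sketch of the full sequence, assuming the volume can be taken offline and a GlusterFS 3.x installation (the extended attribute names may differ between versions):</p>
<pre><code># stop and remove the old, IP-based volume definition
# ("volume delete" does not touch the data in the brick directories)
gluster volume stop xfs-plain
gluster volume delete xfs-plain

# on each vm host: the bricks still carry the old volume's extended
# attributes and must be cleared before they can be reused
setfattr -x trusted.glusterfs.volume-id /home/gluster
setfattr -x trusted.gfid /home/gluster

# re-create the volume name based and start it
gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
gluster volume start xfs-plain
</code></pre>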
<p>And it correctly shows up in the volume info:</p>
<pre><code>% gluster volume info
Volume Name: xfs-plain
Type: Replicate
Volume ID: fe45c626-c79d-4e67-8f19-77938470f2cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster
</code></pre>
<p>And now we can mount it successfully on the frontend using</p>
<pre><code>% mount -t glusterfs vmhost2:/xfs-plain /mnt/gluster
</code></pre>
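<p>To make the mount survive reboots, a matching <strong>/etc/fstab</strong> entry can be added (a sketch; the <em>_netdev</em> option delays the mount until the network is up):</p>
<pre><code>vmhost2:/xfs-plain  /mnt/gluster  glusterfs  defaults,_netdev  0 0
</code></pre>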
<h2>Follow up</h2>
<p>If you find this article interesting, you may want to stay updated by following
me and ungleich on Twitter:</p>
<ul>
<li><a href="https://twitter.com/ungleich">@ungleich</a></li>
<li><a href="https://twitter.com/NicoSchottelius">@NicoSchottelius</a></li>
</ul>