Saturday, June 25, 2011

Running Crashplan on the Excito B3 server

I was lucky enough to actually WIN an Excito B3 server a few months ago. *big smile* I'm using the little box for all kinds of things: SVN server, SSH and VPN tunneling, personal cloud, wiki, etc...

Some time ago I also got a subscription to Crashplan+, so I could back up the family pictures to the Crashplan Central servers. Up until now, I always ran their backup software from my main desktop. That meant the desktop stayed on many nights in order to back up the 50+ GB of pictures to the Crashplan servers over my slow internet connection. I didn't want to run the backup during the day while I was working on the desktop, so as not to saturate the bandwidth.

I wasn't happy with this situation. So in order to automate the backups completely, I decided to try running the Crashplan software directly on the B3 server, because it is always on anyway and idle 99.5% of the time.

I encountered a few problems along the way, so I will explain my working solution below.

First of all, the B3 server is headless; it has no screen attached. If you get the box from Excito and plug it into your network, all configuration is done through a web interface.

So the first thing I did was enable SSH login for my user, using the B3 web interface.

Second, the Crashplan software is Java based, so we need a Java Virtual Machine on the B3 in order to get it running. Just change to root privileges on the B3 (the B3 runs Debian) and install the following packages:

su #use root password of your B3
apt-get install openjdk-6-jdk
apt-get install libjna-java
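
You can quickly verify that the JVM was installed correctly before moving on:

java -version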

Next, reading up on the headless installation howto on the Crashplan support page, it looked easy enough. I got the installation files for Linux, unpacked them on the B3, changed to root privileges and started the installer.

wget http://download.crashplan.com/installs/linux/install/CrashPlan/CrashPlan_3.0.3_Linux.tgz
tar zxvf CrashPlan_3.0.3_Linux.tgz
cd CrashPlan-install
su #use root password of your B3
./install.sh

Now, the Crashplan software consists of two parts: the engine and the client. The idea is that you run the engine headless on the B3 and connect to it using the client on a computer with an attached screen / GUI. In order to connect to that engine from your computer, you need to trick the client into thinking it connects to a local port. That local connection is then forwarded to a port on the B3 server over an SSH tunnel.

You can read all about setting it up on the Crashplan support site.
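
In rough strokes, and using the Crashplan defaults from that howto (the user name and IP address below are placeholders for your own B3), it comes down to this:

#on the desktop: make the client connect to local port 4200 instead of 4243,
#by changing the servicePort line in the client's conf/ui.properties to:
#  servicePort=4200
#then forward that local port to the engine port on the B3:
ssh -L 4200:localhost:4243 admin@192.168.0.10
#keep the tunnel open and start the Crashplan client; it now talks to the B3 engine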

Right, time to start the backups... or so I had hoped. When connecting from the computer to the B3 with the Crashplan client, it kept giving me "connection failed" messages.

/usr/local/crashplan/bin# channel 3: open failed: connect failed: Connection refused

I went over to the B3 to see which ports were listening for connections. But port 4243, the default Crashplan engine port the client connects to, was not in the list.

netstat -an | grep LISTEN

I read the Crashplan logs and found the issue... The libjtux.so native library (http://basepath.com/aup/jtux/) that is included with the Crashplan installer was compiled for a 32 bit Intel system, and that won't run on the ARM CPU that is in the B3 server.

In my case the Crashplan log file is located at /usr/local/crashplan/log/engine_error.log. It contained the message:

java.lang.UnsatisfiedLinkError: /usr/local/crashplan/libjtux.so: /usr/local/crashplan/libjtux.so: cannot open shared object file: No such file or directory (Possible cause: can't load IA 32-bit .so on a ARM-bit platform)



Next on the list was to recompile libjtux.so for the ARM platform and replace the one from Crashplan with our own ARM compiled version. I got the jtux source from this location: jtux.tar.gz

If you extract it, you will see there is no makefile present. You need to patch the source code with a file you can get from here: jtux.PS3-YDL6.1.patch.txt

First, install the tools you need to patch and compile jtux on your B3 server:

su #use root password of your B3
apt-get install gcc
apt-get install patch
apt-get install make #needed to run the makefile, may already be installed

In order to compile the code, extract the downloaded file and apply the patch. This fixes a few bugs and adds the makefile, so you can compile the code.

tar zxvf jtux.tar.gz
cd #to your extracted directory
patch <jtux.PS3-YDL6.1.patch.txt

Now edit the makefile and modify the first line so that the Java include path points to /usr/lib/jvm/java-1.6.0-openjdk/include.
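
For illustration, the adjusted line could end up looking something like this (the exact variable name depends on the patched makefile, so adapt it to what you find there):

JAVA_INCLUDE = /usr/lib/jvm/java-1.6.0-openjdk/include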

Compile it by running "make", and you should end up with a libjtux.so that works on the ARM platform.

make

Back up the libjtux.so from your original Crashplan installation folder and replace it with the one you just compiled:

mv /usr/local/crashplan/libjtux.so /usr/local/crashplan/libjtux.so_original
cp libjtux.so /usr/local/crashplan/libjtux.so

At this point the Crashplan engine will start, but it won't be able to back up anything due to JNA errors that show up in one of the logfiles.

To fix this, just edit /usr/local/crashplan/bin/CrashPlanEngine and find the line that begins with FULL_CP=

Just add "/usr/share/java/jna.jar:" (without the quotes) to the beginning of the line, right after the "=".
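
For reference, after the edit the line should look roughly like this (everything after the jna.jar entry is simply what your CrashPlanEngine script already contained):

FULL_CP="/usr/share/java/jna.jar:$TARGETDIR/lib/com.backup42.desktop.jar:$TARGETDIR/lang"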

When this is done, the Crashplan engine should start up fine. It took almost 4 minutes to get the engine started on my B3; during that time the CPU was constantly running at 100%.


Start the Crashplan engine with one of the following commands, with root privileges:

/etc/init.d/crashplan start
/etc/init.d/crashplan restart
/usr/local/crashplan/bin/CrashPlanEngine start
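
Once the engine is up, you can check with the same netstat command as before that it is now listening on the default port:

netstat -an | grep 4243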

It's been running fine for me ever since. I got it working thanks to this forum post.

If things are not working out... I have all the files mentioned in this post in source and compiled versions, backed up at Crashplan ;-). I'll be happy to send them to you or help you out.


EDIT: Somebody asked me about the ARM compiled version of the /usr/local/crashplan/libjtux.so file. Here it is.

Friday, March 19, 2010

Ubuntu Wubi install fails to boot after kernel update

On the laptop I use at work, like in most big corporations, there's a Windows environment installed. Messing around with creating and deleting partitions is not a good idea while carrying ongoing projects around. So the best option for me to safely install a Linux system on that laptop was to use Wubi.

Installation went without a glitch, and the setup ran fine for months. I installed Ubuntu 9.10 running on a 12GB image file on my NTFS partition. Sure, I lose a little disc speed compared to an install on a dedicated partition, but not so much that it becomes annoying.

But then there was that kernel update, and the reboot, and within seconds I was looking at the GRUB prompt. Hrmmm, not good. Now what?

Fixing this problem was not so hard to do (I found a solution on the Ubuntu forums). At the GRUB prompt, type the following (the lines starting with # are comments, obviously):

# load the ntfs module
insmod ntfs
#set root depending on where you have wubi installed; typically it's (hd0,1) for the Windows c-drive
set root=(hd0,3)
loopback loop0 /ubuntu/disks/root.disk
set root=(loop0)
#set the kernel, you can complete the "..." by pressing the "tab" key and pick your kernel
# /dev/sda3 is where my root is installed, also here typically use sda1 for the c-drive
linux /boot/vmlinuz... root=/dev/sda3 loop=/ubuntu/disks/root.disk ro
#set the initrd image, you can complete the "..." by pressing the "tab" key
initrd /boot/initrd.img...
#now boot the environment you just setup
boot

After entering the "boot" command, hopefully you'll see your system waking up again from its GRUB hibernation. After login, redo the GRUB setup/configuration by executing the following command, so next time you see your old trusty GRUB boot menu again:

sudo update-grub

It took me about half an hour to get my system running again.

Now, after reading up on it, it seems that the above solution might not be a permanent one. It could happen again on the next kernel update. Check out this wiki post: Wubi 9.10 boot problems

Tuesday, March 16, 2010

File system corruption

Normally when I power up my headless Ubuntu server, I wait 1 minute for it to boot and I'm in business. I have access to my files, SVN repository and backups. But not last week. I waited for 15 minutes... nothing happened.

Using SSH to connect to the box was not working. So I hooked up a monitor to the machine and found it was throwing errors that, I figured out, had to do with ext3 file system corruption. Ohhhh hell.

But because this machine is used as a backup server and development environment, this issue luckily wasn't the end of the world for me: I still have 95% of the data that's on that server on other machines. It's just a time consuming job to get everything back and centralized again.

So the terminal greeted me with the "Enter root password or press Ctrl-D" prompt. I was clueless. I read a million different forum threads on using the fsck tool with a mix of command line options. I tried one fsck command that looked safe to execute, then another, but after they ran the system was still in the same state.
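
For reference, the kind of fsck invocation those threads suggest looks like this (a sketch only: /dev/sda1 is a placeholder for the damaged partition, and the file system must be unmounted or mounted read-only when you run it):

fsck.ext3 -fy /dev/sda1 #-f forces a full check, -y answers yes to every repair question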

Hrmmms... the next thing I tried was pulling the disc out of the Ubuntu server and mounting it in a Windows box. I installed the Ext2 FS driver, rebooted and *big smile*... the drive showed up and I was able to recover the data that was unique to the crashed system.

After getting the disc back in the machine and doing a full format of the drives in the server, everything looked fine. I reinstalled the machine with Debian 5.0.4 instead of Ubuntu and only got Samba going again for now, because that was most urgent. After that, I spent a few evenings backing up my data again.

So I now feel a little less secure about that backup server, but only because I still have no idea what's the best way to recover from ext3 file system corruption. I know I succeeded using a Windows environment to recover parts of it, but still, I should be able to do it from that "Enter root password or press Ctrl-D" prompt.

Note: Trying to be better prepared if it ever happens again, I Googled some more about the subject today and stumbled on a free Windows tool called R-Linux, which someone else used to recover his data from an ext3 partition. But still no news on a pure Linux based solution.

Monday, December 14, 2009

Home Server Setup Part 3: Static IP address and Wake On LAN

After the Ubuntu server install, your network interface is configured by default to use DHCP: an IP address is dynamically assigned to your server, by your router for example. I like my server to have at least a static IP address, so I'm sure where to connect to in case I need to access it.

Changing from a DHCP to static setup is easy:

Open the network interface configuration file:

sudo nano /etc/network/interfaces

Depending on the number of network interfaces in your machine, this file will have some sections that start with eth0 or eth1. "eth0" refers to your primary network interface. My default config file looked like this for eth0:

auto eth0
iface eth0 inet dhcp

This is the configuration for a dynamic (DHCP) setup. To give eth0 a static IP address, change the config to look like this:

auto eth0
iface eth0 inet static
address 192.168.0.2
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1

The IP address of my server is 192.168.0.2; the one of my router (the gateway) is 192.168.0.1. I've read somewhere that you must also change the IP that is configured for your nameserver in the /etc/resolv.conf file. But my nameserver configuration was set to my router (192.168.0.1), and my router has nameservers configured. It worked for me this way, so I didn't touch it. You can check whether your server's nameserver configuration is working by pinging a domain, "ping somedomain.com", after restarting the network.

Finally, restart the network to let the changes take effect:

sudo /etc/init.d/networking restart

Setting up Wake On LAN.

My server is located in the garage, tucked away out of sight. As I mentioned in earlier posts, I want to have this server as energy friendly as possible. I don't like paying for many idle CPU cycles on my electricity bill. So I turn on the server only when I need it during the day or evening. To avoid walking to the garage each time I need the server, I made sure I bought a motherboard that supported Wake On LAN (WOL). This way I can boot the server over the network from a client machine.

The configuration of WOL on Ubuntu is fairly easy.

First figure out on which network interface (eth0, eth1, ...) you need to enable WOL. You can get an overview of your network interfaces by typing the "ifconfig" command in your console:


eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
          ....

In my case, I wanted WOL on eth0. Notice also the "HWaddr" in the console output: the 00:11:22:33:44:55 value is the MAC address of your network interface card. You will need this later on to actually "wake up" your server from a client machine.

Good, now let's add the wakeonlan configuration file to the Ubuntu 8.04.1 system:

cd /etc/init.d
sudo nano wakeonlanconfig

Add the following lines to it and save the file (the script uses the ethtool command; if it is not installed yet, "sudo apt-get install ethtool" fixes that).

#!/bin/bash
ethtool -s eth0 wol g
exit

After the file is created, make it executable, from the /etc/init.d directory:

sudo chmod a+x wakeonlanconfig

Make sure that the script runs on startup of your machine:

sudo update-rc.d -f wakeonlanconfig defaults

In my case that was it, WOL is configured now.
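
You can check that the setting actually took effect by querying the interface with ethtool; the "Wake-on" line in the output should show "g" (wake on magic packet):

sudo ethtool eth0 | grep Wake-on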

I always shutdown my server using the following command:

sudo halt

**Note** in order for this to work, you possibly need to configure your BIOS settings to enable WOL for your network device. Check the manual of your motherboard to figure out the settings.

Now that your server is shut down, on your client machine (in my case a Windows box) you must use the MAC address of your server's network interface (see above, 00:11:22:33:44:55) to send the server its wake-up call.

I can't really recommend a specific tool for that (I use a simple freeware tool from Depicus), but searching Google for "WOL" and/or "magic packet sender" will give you a list of possibilities. All tools come down to one thing: filling in the MAC address and/or IP address of your server and firing the magic packets off over the network. Depending on the boot-up time of your server, it will be available a few minutes later.
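
If your client machine runs Linux, the small "wakeonlan" package does the same job from the command line:

sudo apt-get install wakeonlan
wakeonlan 00:11:22:33:44:55 #use the MAC address of your server's network card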

Thursday, November 5, 2009

Home Server Setup Part 2: Samba File Server Setup

One of the main reasons I wanted a dedicated home server was to use it as a backup and file server. I work on a laptop and a desktop, and keeping all my data in sync, accessible and backed up is, for me, the biggest challenge of working on multiple machines. I used to play around with burning DVDs and copying files to external USB drives, but I got fed up with it and decided to tackle the problem.

File server
I've heard a lot about Samba but I know only the basics. As I understand it, it is a software package that shares directories over the network and can act as a print server between Microsoft Windows systems and Unix-like systems. So Samba will let me set up network shares for chosen directories on the server. These shares appear to a Microsoft Windows system as normal Windows folders accessible via the network. On my Linux box I can also just mount those shares to become part of the file system. Perfect.

Configuration
Note that this configuration was done on Ubuntu Server 8.04.1

First get the Samba software from your local apt mirror:

sudo apt-get install samba

After installation and before you start messing around, make a copy of the Samba configuration file:

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.original

Next, I will try to explain how I configured my Samba setup. Mind you, I'm no expert at all in this matter, but I managed to get a configuration working that suits my needs.

The way I wanted it to work: logged-in Windows users get access to shared resources on the server, using the same username and password combination on the Windows machine as on the server. Authentication to the shared drives is then handled automatically in the background, based on the currently logged-in Windows user accessing the shared resource.

On XSERVER, which is my server's host name, I created a directory called "/data"; this is the root path for all the data I want to store on the server through Samba.

Make sure that the directories are owned by the "users" group. You can change the group of a directory with the following command:

sudo chgrp users /data

Under data, I created directories to organize the data by type like PICTURES, DOCUMENTS, MUSIC, etc...
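
A sketch of how that tree can be created; the group write and setgid permissions are my own suggestion, so files created by one family member stay writable for everybody in the "users" group:

sudo mkdir -p /data/PICTURES /data/DOCUMENTS /data/MUSIC
sudo chmod -R 2775 /data #the leading 2 (setgid) makes new files inherit the directory's group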

Fire up your editor (I prefer nano) to change the Samba configuration file:

sudo nano /etc/samba/smb.conf

In the minimal Samba configuration below, you see values for the workgroup, machine name, user based security settings, etc. I put this configuration together using the documentation provided on the Samba website.

In my configuration you can see two shares: pictures and documents. Two users (jkl and sara) are allowed to access these shares.

The mask directives determine the local file permissions for any new files or directories that are created in any of the shares.

[global]
workgroup = MATRIX
netbios name = XSERVER
server string = Samba Server %v
log file = /var/log/samba/log.%m
max log size = 1000
socket options = TCP_NODELAY IPTOS_LOWDELAY 
preferred master = No
local master = No
dns proxy = No
security = User

[pictures]
path = /data/PICTURES
valid users = jkl, sara
browsable = yes
read only = no
create mask = 0775
directory mask = 0775

[documents]
path = /data/DOCUMENTS
valid users = jkl, sara
browsable = yes
read only = no
create mask = 0775
directory mask = 0775
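
Before restarting Samba, you can have the configuration file checked for syntax errors with the testparm tool that ships with Samba:

testparm /etc/samba/smb.conf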


Now all that remains on the Samba server is creating the user accounts that will access the Samba shares from Windows clients. I configured exactly the same username/password combinations on the Samba server as used on the Windows clients.

Create the "sara" user in Linux:

sudo useradd -c "Sara" sara

Add the new user to the "users" group:

sudo usermod -a -G users sara

Add the Samba user and password to the local Samba password file:

sudo smbpasswd -L -a sara

Enable the user:

sudo smbpasswd -L -e sara


Finally, after all the configuration is done, restart the Samba server to enable the changes:

sudo /etc/init.d/samba restart


Now, on your Windows client machine (I use Windows XP), in order to map the Samba network shares to drive letters, you can issue the following commands:

net use X: \\XSERVER\documents
net use Y: \\XSERVER\pictures

After executing these commands, and provided the username/password combination on your Windows client matches one of the Samba server's users, you can use the X and Y drives on your Windows client to store your precious data.

Conclusion
After a few hours of trial and error, the Samba setup was complete. This was my first Samba setup, so I'm happy to hear from anybody about what I could have done better. But keep in mind all I want is a simple home setup, not highly secured over-the-internet corporate network storage stuff. For the moment, I'm very happy with my current configuration.

Over the gigabit network at home, I have no noticeable delays in handling and editing documents. Sometimes there is a small delay when flipping through a directory of 5 MB JPG images from the digital camera, but not so much that it becomes an issue for me.

When copying large files over the network to the Samba server (digital camera movies for example), the transfer can reach speeds of up to 70 MB/second. That is definitely fast enough to copy multi gigabyte files around.

Wednesday, August 19, 2009

Home Server Setup Part 1: Choosing the OS

Which OS is best for my small home server?

It was instantly clear to me: it had to be Linux. To be honest, I didn't give other OSes much consideration. My reasons for choosing Linux were clear:
  • I didn't want to pay for a license costing as much as the hardware the OS runs on
  • the server had to run headless: no screen, no keyboard or mouse, saving energy costs and space
  • the server had to be controlled remotely
  • because I'm no expert, it had to be an OS with a good community behind it, so I could fix problems easily
  • I had to have access to easy-to-install, easy-to-configure services with good documentation
Now, I'm writing this article after having configured the server, and it is running smoothly. But there were a few occasions where "easy to install and configure" backfired on me and I almost wished I had the Windows point & click configuration at my disposal. But up to now, I have not had a problem that patience, Google and the Ubuntu forums couldn't help me solve.

What distribution to choose?

Because I had already been using the Ubuntu desktop for a few years, I went ahead with Ubuntu 8.04 LTS Server. I didn't need the bleeding edge, latest and greatest versions of the installed services; I just wanted a stable system for a few years. The Ubuntu LTS server variant is maintained for 5 years, so it was the perfect choice for my needs.

Friday, March 20, 2009

Home Server Hardware Choice

For some time I'd been toying with the idea of building a small server to use at home. To start, it would be used as a file server (pictures, home movies, documents, email), Java application server (Tomcat), source code control system (Subversion), backup station and an interface to the home domotica system.

It had to be energy efficient, and more or less future proof, as I don't intend to buy new hardware each year.

Now, it took me quite some time to pick the hardware, but eventually this is the setup that is now running, perfectly silent, in the garage.

Case: Antec Sonata (old case I had lying around)
Motherboard: Asus P5QL-EM iG43. The onboard video saves some power compared to using an add-in card. It has 6 SATA II connectors, so there's plenty of room to add discs.
CPU: Intel Pentium Dual-Core E5200. Gives me more than enough number crunching power to use at home.
Memory: Kingston 2x1GB DDR2 SDRAM PC6400. I know, way too much memory for this small server, but it was dirt cheap.
Power supply: Seasonic S12II-380 380W. Quiet PSU with good reviews all round.
System drive: WD Caviar RE WD3200YS (spare drive used as system disc).
Storage: WD Caviar Green 1 TB ( WD10EACS ).

At first I wanted a RAID configuration, because I hate losing data to disc failure. (Noooo, really?) But after giving it some thought, I decided I didn't need RAID. It only adds to the power usage and overall cost of the system:
* You have 2, 3 or more discs spinning, eating away power, instead of just 1
* The initial costs would be a lot higher, since I would have to buy more discs.
* When done right, RAID performance can be faster than a single disc, but for my family's needs a single drive easily keeps up. I get between 30-60 MByte/s over the Gbit network, depending on the file size.

Instead of relying on RAID to tackle the disc failure problem, I now do a weekly rsync of my files from the server to my desktop (see the sketch below). This works as long as my desktop drive has space available. If the desktop runs out of space, I'll probably add an extra drive to the server that only spins up once a day for the rsync, and hook up an external drive weekly to create a backup.
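
The sync boils down to a one-liner along these lines (the host name and paths are placeholders for my real setup):

rsync -av --delete xserver:/data/ /home/jkl/backup/data/ #mirror the server's data share to a local directory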

The system currently draws more or less 45-50W from the socket while idle or just serving files. It needs 65W when booting or under heavy load.

So, is there anything I don't like about this setup?

Sadly, yes. First, the power supply is maybe not the best pick: the system only needs 65W at full load, so it would have been more energy efficient to buy a 100W power supply, for example.

Second, the current system drive (an old 3.5 incher) is probably the first item to be replaced, by a low power 2.5 inch laptop drive.

The system is currently running Ubuntu Server 8.04.

I'll document the setup of the services and applications it is running in future articles.