Fixing a timeout waiting for LVM2 devices when booting CentOS 7

After my last system update I started to experience problems during the boot process: my server couldn't finish booting with any of the kernel images installed on CentOS.
I checked booting with the rescue image, which had no problems detecting the LVM2 devices, but the same timeout appeared for the RAID device.
Fortunately the rescue image booted properly: /, /home and the remaining partitions were correctly mounted, except the partition used by the backup software on the RAID device (I had previously removed the RAID device entry from /etc/fstab).
I tried changing some kernel parameters at boot time, as suggested on some web pages, but without success.
A little frustrated after a lot of reboot attempts, I finally fixed the problem by rebuilding all the initial ramdisks with the following command:
# dracut --regenerate-all -fv --mdadmconf --fstab --add=mdraid --add-driver="raid1 raid10 raid456"

Then I added the RAID device entry back to /etc/fstab, rebooted the server and checked that the boot process works like a charm with all the kernel images, including the rescue one.
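To double-check that the RAID modules actually ended up in the rebuilt images, lsinitrd (shipped with the dracut package) can list their contents; the grep pattern below is just an illustrative example of what to look for:
# lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'mdraid|raid1'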

Good judgement is the result of experience … Experience is the result of bad judgement.
— Fred Brooks

Installing CyanogenMod 12.1 on Nexus 7 (2012 version) (a.k.a. “grouper”)

This entry is just a reminder of the steps I followed to install CyanogenMod 12.1 on my Nexus 7 (2012).
This is a prerequisite for being able to back up my tablet with BackupPC (you can check this link: Nexus 7 & BackupPC).
The following information is extracted from this guide.
First of all, some packages need to be installed on the PC that will be used to connect to the tablet; in my case:
# aptitude install android-tools-adb android-tools-fastboot
These packages provide the “adb” and “fastboot” commands required to complete the device flashing process.

Once the tablet is connected to the PC, and with Developer Options and the USB debugging option enabled (explained here), execute these commands to boot into fastboot mode:
$ adb reboot bootloader
$ fastboot devices
XXXXXXX fastboot

XXXXXXX -> Device serial number
The next command is used to unlock the bootloader:
$ fastboot oem unlock
After this point you need to re-enable USB debugging to be able to continue.

Now it's time to install a custom recovery image; to learn more about recovery images, visit this page.

You need to download the image to be installed, for me it was: https://dl.twrp.me/grouper/twrp-2.8.7.0-grouper.img

  • Connect the device to the computer via USB (as we did in an earlier step)
  • Execute these commands:

$ adb reboot bootloader
$ fastboot devices
XXXXXXX  fastboot
$ cd Downloads && fastboot flash recovery twrp-2.8.7.0-grouper.img
sending 'recovery' (11850 KB)...
OKAY [  1.764s]
writing 'recovery'...
OKAY [  0.397s]
finished. total time: 2.160s

  • Reboot the device into recovery just to check that the recovery image is correctly installed (see the command below).
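If the device is still connected over USB, you can also reboot into recovery from the PC, assuming adb debugging is still enabled:
$ adb reboot recovery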

Now let's continue with the CyanogenMod installation from the custom recovery. The first step is to download the build package for your tablet; I decided to use a snapshot release because it is the stable package version (cm-12.1-20151117-SNAPSHOT-YOG7DAO1KA-grouper.zip).
I also downloaded this file open_gapps-arm-5.1-nano-20160809.zip from here.

  • Place the build packages (zip files) in the root of /sdcard/ using these commands:

$ cd ~/Downloads && adb push cm-12.1-20151117-SNAPSHOT-YOG7DAO1KA-grouper.zip /sdcard/
$ adb push open_gapps-arm-5.1-nano-20160809.zip /sdcard/
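To confirm that both files actually landed on the device, a quick check over adb (the /sdcard path is the one used in the push commands above):
$ adb shell ls -l /sdcard/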

  • Boot into recovery mode (Team Win Recovery Project); on my device this requires holding Volume Up, Volume Down and the Power button. From there we can flash the CyanogenMod image.

It's highly recommended to create a backup before flashing the zip file. After that, select Wipe and then Factory Reset; next select the Install option, navigate to /sdcard, select the cm-12.1-20151117-SNAPSHOT-YOG7DAO1KA-grouper.zip file and follow the messages printed on screen. To install the Open GApps image follow the same process.

Now you just need to reboot the Nexus 7 and check that CyanogenMod is running properly.

I want to thank “The CyanogenMod Team”, its community and “The Open GApps Team” for their effort. Thanks, guys!

Some acronyms related to this topic:

  1. OEM: Original Equipment Manufacturer
  2. OTA: Over The Air
  3. USB: Universal Serial Bus

Enjoy!


‘I understand that readers do not need to know these things; but I need to tell them.’
— Rousseau.

Set up RAID 1 (Mirroring) on CentOS 7

This entry explains the steps I followed to set up the storage for the BackupPC software.

Obviously the right storage setup depends on your hardware and your needs. In my case I decided to use my HP ProLiant MicroServer Gen8 G1610T as my backup server, and I wanted to be reasonably safe against data loss due to hardware failures, so I decided to set up two internal Western Digital Red hard disks as a RAID 1 device.

To avoid the mistake I made, I would recommend that you read your hardware specifications carefully and even run some checks with dd…

After installing CentOS 7 on my HP server I read the following blog entry:

The Gen8 model’s 4 bays are split — Bays 1 and 2 SATA3 6Gbps, while Bays 3 and 4 are SATA2 3Gbps.

Unfortunately I discovered these specifications too late, when the OS was already installed on /dev/sda and the two disks used for the RAID device were located in the second and third bays. That means different speed rates, so the RAID device will work at the lowest transfer speed (I guess; I haven't measured the I/O speed of the RAID device).

I don't have much free time, so I decided to live with this issue (and not reinstall my server), because after all it is a home server; but I learnt an important lesson: you need to know your hardware really well. With all of that said, these are the steps I followed:

Selecting physical devices:
/dev/sdb
/dev/sdc

Let’s create partitions:
# fdisk /dev/sdb
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).
First step, create the partitions:
Command (m for help): m
...
o create a new empty DOS partition table
---
Building a new DOS disklabel with disk identifier 0x9f64c2f4.
The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-3907029167, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-3907029167, default 3907029167):
Using default value 3907029167
Partition 1 of type Linux and of size 1.8 TiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): p
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x9f64c2f4
Device Boot Start End Blocks Id System
/dev/sdb1 2048 3907029167 1953513560 fd Linux raid autodetect

Repeat the same steps for the /dev/sdc device.
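Alternatively, the partition table can be cloned from /dev/sdb to /dev/sdc in one shot with sfdisk; this is just a convenience sketch, so double-check the source and target devices before running it:
# sfdisk -d /dev/sdb | sfdisk /dev/sdc
With both disks partitioned, the next step is to examine them: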
# mdadm -E /dev/sd[b-c]
/dev/sdb:
MBR Magic : aa55
Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 3907027120 sectors at 2048 (type fd)
# mdadm -E /dev/sd[b-c]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

Create a RAID device:
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check RAID device:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Feb 29 23:57:11 2016
Raid Level : raid1
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Feb 29 23:58:17 2016
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Resync Status : 0% complete
Name : g8.acme:0 (local to host g8.acme)
UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
Events : 13
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
# mdadm -E /dev/sd[b-c]1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
Name : g8.acme:0 (local to host g8.acme)
Creation Time : Mon Feb 29 23:57:11 2016
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=48 sectors
State : active
Device UUID : 4d9290ae:994f8d57:602be8b6:73edb241
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Feb 29 23:59:02 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7fcc116a - correct
Events : 22
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 20d65619:0ed9ba74:36f94bc0:6fddc56e
Name : g8.acme:0 (local to host g8.acme)
Creation Time : Mon Feb 29 23:57:11 2016
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=48 sectors
State : active
Device UUID : b0eb0220:52f87062:dcbf7a6a:75028466
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Feb 29 23:59:02 2016
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d485bd04 - correct
Events : 22
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Reviewing the RAID configuration:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
1953382464 blocks super 1.2 [2/2] [UU]
[====>................] resync = 23.3% (456830464/1953382464) finish=193.4min speed=128915K/sec
bitmap: 12/15 pages [48KB], 65536KB chunk
unused devices:
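The initial resync runs in the background and can take hours on disks this size; to keep an eye on the progress you can simply re-read /proc/mdstat periodically, for example:
# watch -n 60 cat /proc/mdstat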

Format the RAID device with a journaling file system:
# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
122093568 inodes, 488345616 blocks
24417280 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2636120064
14904 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:

If you want to determine whether a given device is a component device or an md array, you can execute the following commands:
# mdadm --query /dev/md0
/dev/md0: 1862.89GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
# mdadm --query /dev/sdb1
/dev/sdb1: is not an md array
/dev/sdb1: device 0 in 2 device active raid1 /dev/md0. Use mdadm --examine for more detail.

List the array definition line:
# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=g8.acme:0 UUID=20d65619:0ed9ba74:36f94bc0:6fddc56e
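To make sure the array is assembled with the same name on every boot, the usual approach is to append this line to the mdadm configuration and regenerate the initramfs; a minimal sketch on CentOS 7, where the file lives at /etc/mdadm.conf:
# mdadm --detail --scan >> /etc/mdadm.conf
# dracut -f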

At this point I decided to set up LVM2 on top of the RAID device; the choice was made more out of curiosity than for a technical reason.

Create Physical Volume using RAID1 array:
# pvcreate /dev/md0
WARNING: ext4 signature detected on /dev/md0 at offset 1080. Wipe it? [y/n]:
WARNING: ext4 signature detected on /dev/md0 at offset 1080. Wipe it? [y/n]: y
Wiping ext4 signature on /dev/md0.
Physical volume "/dev/md0" successfully created

Check Physical volume attributes using pvs:
# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 lvm2 --- 1.82t 1.82t
/dev/sda2 centos lvm2 a-- 424.00g 4.00m

Check Physical Volume information in detail using pvdisplay command:
# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name centos
PV Size 424.01 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 108545
Free PE 1
Allocated PE 108544
PV UUID YnaiKQ-Yz9Z-UUUN-H9aa-XLRq-AT1m-7y8wqh
"/dev/md0" is a new physical volume of "1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size 1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID mf9XlE-QDIs-7Xz3-qCHH-fXok-GclK-yLDfzR

Create volume group named raid1 using vgcreate command:
# vgcreate raid1 /dev/md0
Volume group "raid1" successfully created

See Volume group attributes using vgs command:
# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 3 0 wz--n- 424.00g 4.00m
raid1 1 0 0 wz--n- 1.82t 1.82t

See volume Group information in detail using vgdisplay:
# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 424.00 GiB
PE Size 4.00 MiB
Total PE 108545
Alloc PE / Size 108544 / 424.00 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID ZRYdVb-NmBJ-Z6Mp-NTVo-QklF-Qy7r-OCJRr2
--- Volume group ---
VG Name raid1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TiB
PE Size 4.00 MiB
Total PE 476899
Alloc PE / Size 0 / 0
Free PE / Size 476899 / 1.82 TiB
VG UUID vnJGO0-g8iT-MJTo-wMVh-Zxck-ITRh-jD26H8

Logical volume creation using lvcreate:
# lvcreate -L 100G raid1 -n lvm0
Logical volume "lvm0" created.

View the attributes of Logical Volume:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 200.00g
root centos -wi-ao---- 200.00g
swap centos -wi-ao---- 24.00g
lvm0 raid1 -wi-a----- 100.00g

View Logical Volume information in detail:
# lvdisplay
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID RtvhTy-m8Ra-xOJ2-cxPB-ruEK-4jmC-BGA8lB
LV Write Access read/write
LV Creation host, time localhost, 2015-04-17 18:24:27 +0200
LV Status available
# open 1
LV Size 200.00 GiB
Current LE 51200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/centos/home
LV Name home
VG Name centos
LV UUID p4ahvC-3Y0I-yblG-xzC0-6dJI-hDk3-PJuOt8
LV Write Access read/write
LV Creation host, time localhost, 2015-04-17 18:24:31 +0200
LV Status available
# open 1
LV Size 200.00 GiB
Current LE 51200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID EphLec-154b-jvIY-4MAf-uAnV-4XYe-GffUKQ
LV Write Access read/write
LV Creation host, time localhost, 2015-04-17 18:24:35 +0200
LV Status available
# open 2
LV Size 24.00 GiB
Current LE 6144
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/raid1/lvm0
LV Name lvm0
VG Name raid1
LV UUID xaUN5g-f4Yc-q0jV-NGJ4-lwrR-P0Bs-ZDz8fh
LV Write Access read/write
LV Creation host, time g8.acme, 2016-03-01 01:02:14 +0100
LV Status available
# open 0
LV Size 100.00 GiB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

Format the LVM logical volume:
# mkfs.ext4 /dev/raid1/lvm0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2174746624
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Create the mount point:
# mkdir /mnt/raid1

Mount the logical volume:
# mount /dev/raid1/lvm0 /mnt/raid1
# lvmdiskscan
/dev/loop0 [ 100.00 GiB]
/dev/md0 [ 1.82 TiB] LVM physical volume
/dev/centos/root [ 200.00 GiB]
/dev/loop1 [ 2.00 GiB]
/dev/sda1 [ 250.00 MiB]
/dev/centos/swap [ 24.00 GiB]
/dev/sda2 [ 424.01 GiB] LVM physical volume
/dev/centos/home [ 200.00 GiB]
/dev/mapper/docker-253:0-11667055-pool [ 100.00 GiB]
/dev/raid1/lvm0 [ 100.00 GiB]
/dev/sdd1 [ 438.50 GiB]
/dev/sdd2 [ 4.02 GiB]
/dev/sdd3 [ 23.07 GiB]
/dev/sdd5 [ 133.32 MiB]
/dev/sdd6 [ 23.50 MiB]
4 disks
9 partitions
0 LVM physical volume whole disks
2 LVM physical volumes

LVM's scanning and filtering behaviour (which drives are considered as physical volumes) can be reviewed and tuned in its configuration file:
# less /etc/lvm/lvm.conf

Edit /etc/fstab to mount it permanently:
/dev/raid1/lvm0 /mnt/raid1 ext4 defaults 0 0
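After adding the line, mount -a is a quick sanity check that the fstab entry is valid before the next reboot (the volume was already mounted by hand above, so it should simply report nothing wrong):
# mount -a && df -h /mnt/raid1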

If you want to run availability tests, you can manually mark a component device as failed (see the recovery sketch below):
# mdadm /dev/md0 --fail /dev/sdb1
# mdadm --detail /dev/md0
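To put the mirror back together after such a test, the failed member is normally removed and re-added, and /proc/mdstat then shows the rebuild; a minimal sketch:
# mdadm /dev/md0 --remove /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb1
# cat /proc/mdstat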

“Those who can imagine anything can create the impossible.”
— Alan Turing

How to safely free space in /boot

I have to admit that I'm a newbie with CentOS; that is precisely why I decided to install it on my server, to learn by practice. I'm used to managing my Debian GNU/Linux box, and there are things that I have to write down just to remember them; that's the reason for this entry.

Updating the system (yum update) I found the following message:

Transaction check error:
 installing package kernel-3.10.0-327.18.2.el7.x86_64 needs 30MB on the /boot filesystem

Error Summary
 -------------
 Disk Requirements:
 At least 30MB more space needed on the /boot filesystem.

The df command confirms the bad news:
# df -h
Filesystem               Size  Used Avail Use% Mounted on
...
/dev/sda1                239M  235M     0 100% /boot
...

If you have some old kernel versions installed, the easiest way to solve the space problem in the boot partition is pretty obvious: remove the unused old versions. So let's list the kernel versions installed on our system:
# rpm -qa kernel |sort -V
kernel-3.10.0-229.1.2.el7.x86_64
kernel-3.10.0-229.11.1.el7.x86_64
kernel-3.10.0-229.el7.x86_64
kernel-3.10.0-327.10.1.el7.x86_64
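Before removing anything, it's worth confirming which kernel is currently running so it is not deleted by accident; it should match one of the newer entries in the list above:
# uname -r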

The same information can be obtained with:
# yum list installed|grep ^kernel

Now we can use the package-cleanup command to remove the old ones; in this case I decided to use --count=2 to keep the latest version plus one more, just in case.

# package-cleanup --oldkernels --count=2
...
Removed:
kernel.x86_64 0:3.10.0-229.el7         kernel.x86_64 0:3.10.0-229.1.2.el7         kernel-devel.x86_64 0:3.10.0-229.el7         kernel-devel.x86_64 0:3.10.0-229.1.2.el7
Complete!

# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sda1 239M 134M 88M 61% /boot
...

As yum update wants to install a new kernel version, we'll have to repeat the same package-cleanup once the system update has finished properly, so that only two kernel versions remain on the system (or automate it as sketched below).
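If you prefer yum to keep the number of installed kernels under control automatically, the installonly_limit option in /etc/yum.conf does exactly that; a minimal sketch, setting it to 2 to mirror the package-cleanup choice above:
# vim /etc/yum.conf
installonly_limit=2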

“Be yourself; everyone else is already taken.”
-Oscar Wilde

Quick fix to BackupPC start on CentOS 7

A reader commented that he had a problem when the BackupPC service started and that some actions were needed to fix it; the comment was related to this previous entry. If you get this message when the service starts, you could try the following fix:
$ /usr/share/BackupPC/bin/BackupPC_serverMesg status info
Can't connect to server (unix connect: No such file or directory)

First of all, we need to find the script which controls the service:
# systemctl status backuppc
● backuppc.service - SYSV: Starts and stops the BackupPC server
Loaded: loaded (/etc/rc.d/init.d/backuppc; static; vendor preset: disabled)
Active: active (running) since Sun 2016-05-29 21:26:39 CEST; 14min ago
Docs: man:systemd-sysv-generator(8)
Process: 3410 ExecStart=/etc/rc.d/init.d/backuppc start (code=exited, status=0/SUCCESS)

So let's edit the /etc/rc.d/init.d/backuppc script (after backing it up) and add the highlighted line that creates /var/run/BackupPC:
$ cd /etc/rc.d/init.d/
# cp backuppc backuppc.OLD.`date +%Y%m%d`
start() {...
echo -n "Starting BackupPC: "
[ ! -d /var/run/BackupPC ] && mkdir /var/run/BackupPC; chown backuppc.backuppc /var/run/BackupPC;
...
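As an alternative to patching the init script, and since /var/run is a tmpfs on CentOS 7 that is emptied on every reboot, a systemd-tmpfiles drop-in can recreate the directory at boot; this is only a sketch and the file name is an arbitrary choice:
# cat /etc/tmpfiles.d/backuppc.conf
d /run/BackupPC 0755 backuppc backuppc -
# systemd-tmpfiles --create /etc/tmpfiles.d/backuppc.conf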

It’s time to restart the service and check its status:

# systemctl stop backuppc
# systemctl start backuppc
$ /usr/share/BackupPC/bin/BackupPC_serverMesg status info
Got reply: %Info = ("ConfigLTime" => "1464549999","poolFileCntRep" => 0,"cpoolFileCntRep" => 0,"DUDailyMaxReset" => 0,"cpoolFileCnt" => 0,"cpoolFileCntRm"...

At this point the service starts correctly, but in my case, when the server is restarted the service does not come up automatically, so let's dig deeper to find out what's going on. If we check whether the service is enabled we get this:

# systemctl is-enabled backuppc
static

But what does “static” exactly mean? From this page (netsuso comment) we can read the following explanation:

Static units are those which cannot be enabled/disabled, but it doesn’t mean they are always executed. They will run only if another unit depends on them, or if they are manually started.
Actually, static units are simply those without an [Install] section. As enabling units means just creating a symlink to wherever [Install] mandates, those units without an [Install] section cannot be enabled, as systemctl doesn’t know where to place the symlink.
Of course, you can still manually create a symlink from a static unit to (for instance) /etc/systemd/system/multi-user.target.wants/, and it will be executed as any other enabled unit. But I suppose static units are not intended to be enabled in that way, and most probably you shouldn’t need to do it.

So, what happens if we change the service descriptor and add an [Install] section:

# cp /etc/systemd/system/backuppc.service /etc/systemd/system/backuppc.service.OLD.`date +%Y%m%d`
# vim /etc/systemd/system/backuppc.service
...
[Install]
WantedBy = multi-user.target

With this modification we try to enable the static unit:
# systemctl enable backuppc
Created symlink from /etc/systemd/system/multi-user.target.wants/backuppc.service to /etc/systemd/system/backuppc.service.

It's time to reboot our server and check that the unit starts automatically. Now the service should start without any problems.

That’s all folks!

“Simplicity is the ultimate sophistication”
— Leonardo da Vinci

Installing Nike SportWatch Linux Driver

A long time ago I bought a Nike SportWatch (sportband), and as I use Debian GNU/Linux at home I wanted to manage the device from Linux. I searched for a Linux driver, found the comsport driver and decided to give it a chance.
Let’s download the driver from here.
$ tar xvzf allcomsport-0.1.tar.gz

Reading the installation file, INSTALL_DRIVER_AND_GUI, I realized that I had to check the Device ID to verify that everything matched the instructions, so I plugged in my SportWatch and executed the following command:
$ lsusb
...
Bus 001 Device 010: ID 11ac:5455 Nike
...

  • Bus 001: This is bus number where Nike SportWatch USB is attached.
  • Device 010: This is the tenth device attached to bus 001, there are other devices attached to the same bus.
  • 11ac:5455 is the ID given to this Nike SportWatch. The value before “:” is the vendor (manufacturer) ID and the value after “:” is the device (product) ID.

The first thing I noticed was that the vendor ID obviously matched the instructions (11ac == Nike), but the device ID in the instructions wasn't the same as my device's.
So I checked my user groups and changed the udev rule according to my data:
SUBSYSTEMS=="usb", ATTRS{idVendor}=="11ac", ATTRS{idProduct}=="5455",
SYMLINK+="sport", GROUP="aitor", MODE="0666"

  • SUBSYSTEMS=="usb" -> the kernel subsystem which generated the request
  • ATTRS{idProduct}=="5455" -> my device ID
  • GROUP="aitor" -> your user group
  • SYMLINK+="sport" -> a symlink /dev/sport will be created pointing to the device

# cp 90-sportband.rules /etc/udev/rules.d/
Reload udev rules:
# udevadm control --reload-rules
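After reloading the rules, re-plug the watch and check that the symlink defined in the rule actually shows up:
# udevadm trigger
$ ls -l /dev/sport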

Now it's time to compile the driver, where I ran into some errors:
$ tar xvzf comsport-0.1.tar.gz
$ mkdir /opt/nike
$ cd comsport/comsport-0.1
$ ./configure --prefix=/opt/nike
...
checking for intltool >= 0.35.0... ./configure: line 6222: intltool-update: command not found
configure: error: Your intltool is too old. You need intltool 0.35.0 or later.
# aptitude install intltool
# dpkg -l |grep intltool
ii intltool 0.50.2-3 all Utility scripts for internationalizing XML
ii intltool-debian
$ ./configure --prefix=/opt/nike
...
checking for libusb... configure: error: Package requirements (libusb-1.0) were not met:
No package 'libusb-1.0' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables libusb_CFLAGS
and libusb_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
# dpkg -l|grep libusb
ii libgusb2:amd64 0.2.8-1 amd64 GLib wrapper around libusb1
ii libusb-0.1-4:amd64 2:0.1.12-28 amd64 userspace USB programming library
ii libusb-1.0-0:amd64 2:1.0.20-1 amd64 userspace USB programming library
ii libusb-1.0-0:i386 2:1.0.20-1 i386 userspace USB programming library
...

So let’s find the libusb library:
$ find /lib -iname "*libusb*"
/lib/x86_64-linux-gnu/libusb-0.1.so.4
/lib/x86_64-linux-gnu/libusb-0.1.so.4.4.4
/lib/x86_64-linux-gnu/libusb-1.0.so.0.1.0
/lib/x86_64-linux-gnu/libusb-1.0.so.0
/lib/i386-linux-gnu/libusb-1.0.so.0.1.0
/lib/i386-linux-gnu/libusb-1.0.so.0

After some tries I found the right settings:
$ libusb_LIBS="-L/lib/x86_64-linux-gnui -lusb" libusb_CFLAGS="-I/lib/x86_64-linux-gnu" ./configure --prefix=/opt/nike

$ make
make all-recursive
make[1]: Entering directory '/home/aitor/soft/comsport/comsport-0.1'
Making all in src
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/src'
g++ -DHAVE_CONFIG_H -I. -I.. -DPACKAGE_LOCALE_DIR=\""/opt/nike/share/locale"\" -DPACKAGE_SRC_DIR=\""."\" -DPACKAGE_DATA_DIR=\""/opt/nike/share"\" -I/lib/x86_64-linux-gnu -g -O2 -MT DeviceCom.o -MD -MP -MF .deps/DeviceCom.Tpo -c -o DeviceCom.o DeviceCom.cc
DeviceCom.cc:14:17: fatal error: usb.h: No such file or directory
compilation terminated.
Makefile:352: recipe for target 'DeviceCom.o' failed
make[2]: *** [DeviceCom.o] Error 1
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/src'
Makefile:386: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1'
Makefile:295: recipe for target 'all' failed
make: *** [all] Error 2

To fix this new compile error we need to install the libusb-dev package on the system:
# aptitude install libusb-dev

Once the package was installed I compiled the driver again and everything went perfectly:
$ make
make all-recursive
make[1]: Entering directory '/home/aitor/soft/comsport/comsport-0.1'
Making all in src
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/src'
/bin/bash ../libtool --tag=CXX --mode=link g++ -g -O2 -o comsport main.o DeviceCom.o BufferControl.o Controller.o Profile.o ConvertControl.o -lusb -L/lib/x86_64-linux-gnui -lusb
libtool: link: g++ -g -O2 -o comsport main.o DeviceCom.o BufferControl.o Controller.o Profile.o ConvertControl.o -L/lib/x86_64-linux-gnui -lusb
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/src'
Making all in po
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/po'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/po'
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1'
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1'
make[1]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1'

$ make install
Making install in src
make[1]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/src'
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/src'
test -z "/opt/nike/bin" || /bin/mkdir -p "/opt/nike/bin"
/bin/bash ../libtool --mode=install /usr/bin/install -c comsport '/opt/nike/bin'
libtool: install: /usr/bin/install -c comsport /opt/nike/bin/comsport
make[2]: Nothing to be done for 'install-data-am'.
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/src'
make[1]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/src'
Making install in po
make[1]: Entering directory '/home/aitor/soft/comsport/comsport-0.1/po'
linguas=""; \
for lang in $linguas; do \
dir=/opt/nike/share/locale/$lang/LC_MESSAGES; \
/bin/bash /home/aitor/soft/comsport/comsport-0.1/install-sh -d $dir; \
if test -r $lang.gmo; then \
/usr/bin/install -c -m 644 $lang.gmo $dir/comsport.mo; \
echo "installing $lang.gmo as $dir/comsport.mo"; \
else \
/usr/bin/install -c -m 644 ./$lang.gmo $dir/comsport.mo; \
echo "installing ./$lang.gmo as" \
"$dir/comsport.mo"; \
fi; \
if test -r $lang.gmo.m; then \
/usr/bin/install -c -m 644 $lang.gmo.m $dir/comsport.mo.m; \
echo "installing $lang.gmo.m as $dir/comsport.mo.m"; \
else \
if test -r ./$lang.gmo.m ; then \
/usr/bin/install -c -m 644 ./$lang.gmo.m \
$dir/comsport.mo.m; \
echo "installing ./$lang.gmo.m as" \
"$dir/comsport.mo.m"; \
else \
true; \
fi; \
fi; \
done
make[1]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1/po'
make[1]: Entering directory '/home/aitor/soft/comsport/comsport-0.1'
make[2]: Entering directory '/home/aitor/soft/comsport/comsport-0.1'
make[2]: Nothing to be done for 'install-exec-am'.
test -z "/opt/nike/doc/comsport" || /bin/mkdir -p "/opt/nike/doc/comsport"
/usr/bin/install -c -m 644 README COPYING AUTHORS ChangeLog INSTALL NEWS '/opt/nike/doc/comsport'
make[2]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1'
make[1]: Leaving directory '/home/aitor/soft/comsport/comsport-0.1'

Now that the driver is built and installed, it's time to test it:
$ /opt/nike/bin/comsport -h
Comsport v0.1, driver for sportband+
no sportband found.

On the first try the device wasn't found. Reviewing the driver source code I found that the device ID is hardcoded in the DeviceCom.cc file, so I changed the following line to match my device ID and recompiled the source code:
if (dev->descriptor.idVendor == 0x11ac && dev->descriptor.idProduct == 0x5455) {

After that I recompiled the source files and executed the driver binary again, but unfortunately it generated a “Segmentation fault”. Taking a look at syslog, some commands had failed:

[19429.453915] usb 1-1: usbfs: USBDEVFS_CONTROL failed cmd comsport rqt 161 rq 1 len 8 ret -110
[19430.453924] usb 1-1: usbfs: USBDEVFS_CONTROL failed cmd comsport rqt 161 rq 1 len 64 ret -110
[19431.453925] usb 1-1: usbfs: USBDEVFS_CONTROL failed cmd comsport rqt 161 rq 1 len 64 ret -110
[19432.453938] usb 1-1: usbfs: USBDEVFS_CONTROL failed cmd comsport rqt 161 rq 1 len 16 ret -110
[19432.454022] traps: comsport[24765] general protection ip:406b57 sp:7ffc1a9f8c68 error:0 in comsport[400000+c000]

In the end the driver needs further modification to work properly with my SportWatch, so I will have to dig deeper if I want to manage this device natively on Debian. A simpler option would be to run a virtualized Microsoft Windows and install the Nike drivers.


“If things go wrong, don’t go with them.”
–Roger Babson

Install and set up BackupPC on CentOS 7

Some months have passed since my last entry, but I hope this year I can spend some time writing new ones.
This time I want to explain the steps I followed to install and set up the BackupPC software on my server.
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Let's start by installing the CentOS package:
# yum install BackupPC
In this first step I hit a failure due to an unmet dependency, which I could fix with the following command:
# package-cleanup --cleandupes
As a reader has commented, if you add the rpmforge and epel repositories you can prevent the previous problem, so let's document how to add them:

RPMforge is a third-party package repository for Red Hat Enterprise Linux (RHEL) and Community ENTerprise Operating System (CentOS). It provides more than 5000 software packages in RPM format for these Linux distributions.

# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
# rpm -Uvh rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm

Importing repository key:
# wget http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
# rpm --import RPM-GPG-KEY.dag.txt
# ls -lrt /etc/pki/rpm-gpg
...
-rw-r--r--  1 root root 1672 Apr 15  2011 RPM-GPG-KEY-rpmforge-dag
...

Installing packages from RPMforge:
# yum --enablerepo=rpmforge install PACKAGE_NAME

EPEL (Extra Packages for Enterprise Linux) is a free, open source, community-based repository project from the Fedora team which provides high quality add-on packages for Linux distributions including RHEL (Red Hat Enterprise Linux), CentOS and Scientific Linux. The EPEL project is not part of RHEL/CentOS, but it complements these distributions with many open source packages for networking, system administration, programming, monitoring and so on.

# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
# rpm -ivh epel-release-7-6.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-7-6 ################################# [100%]

Checking if everything is fine:
# yum repolist
...
Loading mirror speeds from cached hostfile
* base: centos.mirror.xtratelecom.es
* epel: epel.check-update.co.uk
* extras: centos.mirror.xtratelecom.es
* nux-dextop: mirror.li.nux.ro
* rpmforge: fr2.rpmfind.net
* updates: centos.mirror.xtratelecom.es
...

How to use:
# yum --enablerepo=epel info PACKAGE_NAME
Now the BackupPC package is fully installed, so it's time to configure the Apache web server. I prefer to back up the Apache config files first:
# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:httpd(8)
man:apachectl(8)
# cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.`date +%Y%m%d`
# cp /etc/httpd/conf.d/BackupPC.conf /etc/httpd/conf.d/BackupPC.conf.`date +%Y%m%d`
# vim /etc/httpd/conf/httpd.conf
...
User backuppc
...
# /usr/sbin/httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built: Nov 19 2015 21:43:13
# vim /etc/httpd/conf.d/BackupPC.conf
...
Require ip 10.111.1
...
# /usr/sbin/apachectl configtest
Syntax OK
# systemctl enable httpd
# systemctl is-enabled httpd
enabled

Ensure that all BackupPC files are owned by the backuppc user and the apache group:
# chown -R backuppc.apache /etc/BackupPC /etc/httpd/conf.d/BackupPC.conf /var/lib/BackupPC
As an optional step I changed the owner and group of the installation directory:
# chown -R backuppc.backuppc /usr/share/BackupPC

Configure Backuppc Apache user password:
# htpasswd -c /etc/BackupPC/apache.users backuppc
New password:
Re-type new password:

Check access to Apache from remote host:
$ telnet g8-1 80
Trying xxx.xxx.x.xxx...
telnet: Unable to connect to remote host: No route to host

The previous check suggests that the server firewall is blocking the HTTP service, so let's check whether port 80 is open on the server:
# firewall-cmd --zone=public --query-port=80/tcp
no

Open this port to be able to connect to the Apache web server:
# firewall-cmd --permanent --zone=public --add-port=80/tcp
success
# firewall-cmd --reload
success
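Before testing again from the remote host, the rule can also be verified locally; the list should now include 80/tcp:
# firewall-cmd --zone=public --list-ports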
remoteHost$ telnet g8-1 80
Trying xxx.xxx.x.xxx...
Connected to g8-1.
Escape character is '^]'.

Configure backuppc OS user:
# usermod -s /bin/bash backuppc
# passwd backuppc
# su - backuppc
$ cd && ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/BackupPC/.ssh/id_rsa):
Created directory '/var/lib/BackupPC/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/BackupPC/.ssh/id_rsa.
Your public key has been saved in /var/lib/BackupPC/.ssh/id_rsa.pub.
The key fingerprint is:
18:7e:a2:7b:ae:56:a9:f2:55:6d:92:f4:60:65:ea:aa backuppc@g8.acme
The key's randomart image is:
+--[ RSA 2048]----+
| o |
| + |
| . = |
| . * = |
| +.S + |
| .o= o |
| .oo |
| . o+. |
| +E+. |
+-----------------+
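BackupPC typically reaches its clients over passwordless SSH, so the freshly generated public key has to be copied to each client's authorized_keys; a minimal sketch, where client.acme and the root account are just placeholder assumptions:
$ ssh-copy-id root@client.acme
$ ssh root@client.acme uptime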

With all these points covered, the basic BackupPC configuration is done, so let's start the services:
# systemctl enable backuppc
backuppc.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig backuppc on
# systemctl start backuppc
# systemctl start httpd

Checking whether it's possible to get status information from the BackupPC server using the command line, I faced this problem:
# su - backuppc
-bash-4.2$ /usr/share/BackupPC/bin/BackupPC_serverMesg status info
Can't connect to server (unix connect: No such file or directory)

To fix the previous problem we need to create the following directory:
# mkdir /var/run/BackupPC
# chown backuppc.backuppc /var/run/BackupPC
# systemctl restart backuppc

Checking again, we can see that the backup server answers properly:
$ /usr/share/BackupPC/bin/BackupPC_serverMesg status info
Got reply: %Info = ("ConfigLTime" => "1457211930","DUlastValueTime" => "1457211930","ConfigModTime" => "1457211709","nextWakeup" => "1457215200","DUDailyMax" => 3,"DUDailyMaxTime" => "1457211930","Version" => "3.3.1","pid" => 5109,"DUlastValue" => 3,"HostsModTime" => "1426394632","startTime" => "1457211930");

(Screenshot: BackupPC web interface)

If you have problems getting the backuppc systemd unit to start automatically, take a look at this link.

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.”
–Mark Twain