Mar. 25th, 2015 03:09 am
pheloniusfriar: (Default)
The days pass like seconds and the minutes become months; it has been a challenging time for me. I have so much to write about, but no time at all (I managed to visit SNOLAB, 2km underground in Sudbury, last Friday, and will be presenting my work on the ATLAS detector upgrade at the Large Hadron Collider at the National Conference on Undergraduate Research (NCUR) in Spokane, Washington next month). I got home today, fell asleep from 2PM to 10PM, and have been up since, just trying to catch up with myself and spend a few minutes with my own thoughts (and, apparently, my blog). I still need to finish posts about the International Astronautical Congress and my multiple trips to Germany. I am starting to wonder if, at this point, they will ever happen...

Here's a short video of what it's like to go to SNOLAB (hint: holy shit, mind officially blown for having been able to see it with my own eyes...):

The reason for this post is that I've realized I need to upgrade my Linux server from Slackware 13.37 to Slackware 14.1, as I've needed to install software that requires more modern libraries. To that end, I just wanted to reproduce a post I made in June 2011 on Livejournal that doesn't seem to be here. This is necessary to frame a coming post on what was required to do the upgrade and to document any issues I encountered. Since this blog is search-indexed, hopefully it can help someone else who is trying to do cool things with their computers. Keep in mind that this is a repost of a historical entry from a few years ago. With that said, the server in question has been rock frickin' solid the whole time. I think I needed to reboot it only once in that entire time because of an actual problem (it has been rebooted more often than that because of power failures and a decision to move it, but only once because of a problem... at this point, 'uptime' says it has been up for 110 days now... since I moved it to the other side of the living room).

Going from stable hardware to a functional Internet server is not an instant process. For instance, deciding how to install the operating system, getting it to boot, and partitioning the drives for data takes a lot of work, especially when "state of the art" is a moving target. When I last installed a system, booting off a RAID 1 partition (mirrored disks... if one disk dies, the exact same data is on the second one as well) was not possible. In my first post on the topic, I had been planning to have one non-mirrored boot partition on each of the two drives (for redundancy) that I would have had to manage manually so I could boot off either disk if the other failed. On my current server, I have a separate (non-mirrored) boot disk (which also holds the operating system) and then a pair of disks in a RAID 1 configuration for my data. I learned, however, that LILO (the LInux LOader) could now boot off a RAID 1 partition! That was going to save me a lot of manual configuration and provide better data safety, so it sounded like a great idea. Right? I mean, right?

Well, I had already partitioned my hard disk as follows (sda and sdb were identically partitioned... and note in case you didn't know or are used to other Unices, Linux blocks are indicated as 1K, not 512 bytes):
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      206847      102400   83  Linux
/dev/sda2          206848     8595455     4194304   82  Linux swap
/dev/sda3         8595456   176367615    83886080   fd  Linux raid autodetect
/dev/sda4       176367616  1953525167   888578776   fd  Linux raid autodetect
Where sda1/sdb1 [100MiB] was going to be where I stored the operating system image to boot off of (manually placing copies on each filesystem and installing the LILO bootloader individually in each disk's Master Boot Record (MBR)), mounted as /boot once the system was running; sda2/sdb2 [4GiB] would be non-mirrored swap partitions (both used simultaneously to give 8GiB of swap); sda3/sdb3 [80GiB] was going to be the RAID 1 (mirrored) / (root) partition; and sda4/sdb4 [some crazyass number of GiB, like 850 or something] was going to be RAID 1 (mirrored) with a Logical Volume Manager (LVM) volume group (VG) on top of it (more on that later...). A quick note on the swap partitions: because I did not put swap on a RAID partition, if the system is heavily loaded, swap is in use, and a disk fails, things could crash (programs and possibly even the operating system). However, if swap space is actually needed, the performance hit of putting it on top of a software RAID implementation would be unforgivable. The system could crash, but if it's brought back up, there's enough swap on one disk to run the system fine on the one functioning swap partition. That's a compromise I feel is acceptable.
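For reference, using both swap partitions simultaneously is just a matter of giving them the same priority; a hypothetical /etc/fstab fragment (device names taken from the layout above) might look like:

```shell
# /etc/fstab entries for the two non-mirrored swap partitions.
# Equal "pri=" values make the kernel interleave across both disks,
# so the full 8GiB is used in parallel; if one disk dies, the
# surviving entry still provides 4GiB of swap on its own.
/dev/sda2   swap   swap   defaults,pri=1   0 0
/dev/sdb2   swap   swap   defaults,pri=1   0 0
```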

I went ahead and created the mirrored devices /dev/md0 and /dev/md1 from /dev/sda3:/dev/sdb3 and /dev/sda4:/dev/sdb4 respectively [mdadm --create /dev/md{0|1} --level=1 --raid-devices=2 /dev/sda{3|4} /dev/sdb{3|4}] and created EXT4 filesystems on /dev/sda1, /dev/sdb1, and /dev/md0 (the mirrored device from the previous step) [mkfs.ext4 /dev/{sda1|sdb1|md0}]. I mentioned earlier that LILO can now boot off RAID 1 partitions, but I did not know that at the point I had done all of this... I installed the Slackware64 13.37 distribution and then started investigating how to do the LILO boot thing properly with my particular configuration. It was then that I learned about the new capability and realized it would be best to roll things back a little and mirror sda1 and sdb1 too. I copied the files out of that filesystem into a temporary directory I created, and rebooted the system so I could change the partitions from type 83 "Linux" to type fd "Linux raid autodetect" and mirror them. Sadly... the temporary directory I had created was on the RAMdisk used by the installation environment, and when I rebooted, all the files were gone. It was a laughing (at myself) head-desk moment... doh! Well, not such a bad thing (I just needed to re-install the OS, so no harm at that stage, heh). It also gave me the chance to redo things with the new configuration: I would make /dev/md0 the /dev/sda1:/dev/sdb1 mirrored partition and go from there.
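Expanded out, the bracketed shorthand above corresponds to something like the following (destructive, of course, and it assumes the partition layout shown earlier):

```shell
# Build the two mirrors and the initial filesystems (wipes data!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # root mirror
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4   # future LVM mirror
mkfs.ext4 /dev/sda1    # boot filesystem, one independent copy per disk
mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/md0     # root filesystem on the mirror
cat /proc/mdstat       # watch the initial resync progress
```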

And here's where things took a turn for the argh... I knew I had to re-number the other mirrored partitions so that the /dev/sda4:/dev/sdb4 pair went from /dev/md1 to /dev/md2, and the /dev/sda3:/dev/sdb3 pair went from /dev/md0 to /dev/md1, freeing /dev/md0 for the boot mirror. How to do this? Well, after much research (this is all new functionality, so it's not well documented anywhere), you stop the mirrored partition (say /dev/mdX, made up of the partitions /dev/sdaN and /dev/sdbN), re-assign it a new "superblock minor number" (let's say Y), and start it back up again [mdadm --stop /dev/mdX; mdadm --assemble /dev/mdY --super-minor=X --update=super-minor; mdadm --assemble /dev/mdY /dev/sdaN /dev/sdbN] (boy, did it take a long time to figure out how to do that!). I did /dev/md2, then /dev/md1, then created /dev/md0, and everything looked good. Did a "cat /proc/mdstat" and everything was happily mirrored and chugging away. Created an EXT4 filesystem on /dev/md0 and everything looked good. I wiped the filesystem on /dev/md1 to make sure I had a clean installation, did a fresh installation, and rebooted the computer just for good measure and... all the RAID device numbering was messed up! I thought it was hard to figure out how to do the stuff I just did... it had nothing on figuring out how to fix this new problem! The clue came when I looked at the information associated with the RAID devices [mdadm --detail /dev/mdX] and saw a line like "Name : slackware:1", where the number after "slackware:" seemed to match the "mdX" number assigned... and also corresponded to the number I used when creating the RAID partition (which the --update=super-minor command didn't seem to change). I wondered whether this was something autogenerated at boot time or whether it was actually part of the RAID configuration information stored on the disk...
I used the program "hexdump" to look at the first few kilobytes of the RAID superblock area on one of the member partitions [hexdump -C -n 8192 /dev/sdaN] and sure enough, the string "slackware:X" was there. I then had to start the search for how to change the "Name" of a RAID array, as apparently this was very new and rarely used functionality. The built-in help indicated it could be done, but the syntax didn't make sense. Ultimately, I figured it out and changed the name (and re-changed the minor number in the superblock as well, just to be sure) [mdadm --stop /dev/mdX; mdadm --assemble /dev/mdY --update=name --name=slackware:Y /dev/sdaN /dev/sdbN; mdadm --assemble /dev/mdY --update=super-minor /dev/sdaN /dev/sdbN; mdadm --assemble /dev/mdY /dev/sdaN /dev/sdbN] and this technique proved reliable and worked like a charm every time (I rebooted the system to make sure everything stuck, and it did, yay!). I understand this is Slackware functionality to guarantee which mdX number gets assigned to a RAID array (where other operating systems can, and do, make assignments randomly), so it's ultimately a Good Thing™, but it's not well documented.
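Gathered into one place, my understanding of the renumber-and-rename dance for a single array looks like this (X, Y, and N are placeholders as above, and I've added an explicit stop between the assemble steps):

```shell
# Move array /dev/mdX (members /dev/sdaN and /dev/sdbN) to /dev/mdY.
mdadm --stop /dev/mdX
mdadm --assemble /dev/mdY --update=name --name=slackware:Y /dev/sdaN /dev/sdbN
mdadm --stop /dev/mdY
mdadm --assemble /dev/mdY --update=super-minor /dev/sdaN /dev/sdbN
mdadm --detail /dev/mdY | grep Name    # should now show "slackware:Y"
```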

So, it was time to finish up the installation by installing the bootloader. The configuration (in /etc/lilo.conf under the /etc directory of the operating system installed on disk, e.g. /mnt/etc/lilo.conf if the OS partition is mounted at /mnt) was pretty much this (LILO was having problems with my video card, so I left out the fancy graphical console modes):
lba32 # Allow booting past 1024th cylinder with a recent BIOS
boot = /dev/sda
# Append any additional kernel parameters:
append=" vt.default_utf8=0"
timeout = 50  # In 1/10ths of a second
vga = normal
# Linux bootable partition config begins
image = /boot/vmlinuz
root = /dev/md1
label = Linux
read-only # Partitions should be mounted read-only for checking
Fairly simple stuff: the "boot" line specifies the "whole disk" so the bootloader gets installed in the Master Boot Record (MBR) of the drive, it loads the Linux image, and it uses /dev/md1 as the root filesystem. Simple, except it didn't work!!! LILO, when run [mount /dev/md1 /mnt; mount /dev/md0 /mnt/boot; chroot /mnt lilo -v -v -v], would generate the message "Inconsistent Raid Version information on /dev/md0". Sigh... now what? Well, it turns out that sometime over the past year, the "metadata format" version of the "mdadm" package had changed from 0.9 to 1.2... and LILO did not know how to read the 1.2 version metadata, so it assumed the superblock of the RAID array was corrupted (there's a bug report filed about it). It could, according to what I read, understand the 0.90 metadata format, so... I copied the files off the /dev/md0 partition (this time onto the actual hard drive, heh) and re-initialized the partition to use the old metadata format (again, it took a huge amount of time to track down the poorly documented command) [umount /mnt/boot; mdadm --stop /dev/md0; mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1; mkfs.ext4 /dev/md0; mount /dev/md0 /mnt/boot] (note that the 0.90 format doesn't store an array name, so the --name trick from earlier doesn't apply here). Once that was done, the boot files could be copied back and lilo run again. When I first tried it, I only installed on /dev/sda, and when I tried to boot, the machine just hung (it never even made it to LILO). This confused me, so I checked the boot order of the disks in the BIOS settings. The "1st disk" was set to boot first and then the "3rd disk" if it couldn't. It took me a while, but eventually (out of desperation) I tried switching the boot order of the disks and... voila... the LILO boot prompt! Turns out that the disk Linux thinks is "a", the BIOS thinks is the "3rd" disk, and "b" was the "1st" disk. Live and learn, eh?
The trick was that LILO still needed to be installed on both hard disks (each has a separate MBR), so "lilo" had to be run, then the "boot" parameter changed to /dev/sdb in lilo.conf, and lilo run again [just "chroot /mnt lilo -v -v -v" once the filesystems were already mounted]. Once I had installed on both /dev/sda and /dev/sdb, it didn't matter which one the BIOS tried first, so that was then working the way it should.
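For the record, the whole boot-mirror rebuild plus dual-MBR install boils down to a sequence like this (a sketch assuming the root mirror is mounted on /mnt as in the text; the sed edits of lilo.conf are just my shorthand for editing the file by hand):

```shell
# Recreate /dev/md0 with 0.90 metadata so LILO can read its superblock.
umount /mnt/boot
mdadm --stop /dev/md0
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/boot
# ...copy the saved boot files back into /mnt/boot here...
# Install LILO into the MBR of each disk in turn.
sed -i 's|^boot = .*|boot = /dev/sda|' /mnt/etc/lilo.conf
chroot /mnt lilo -v
sed -i 's|^boot = .*|boot = /dev/sdb|' /mnt/etc/lilo.conf
chroot /mnt lilo -v
```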

Great... right? Sigh... the kernel would load and then panic because it could not figure out how to use the root filesystem (it would give the message "VFS: Unable to mount root fs on unknown-block(9,1)"). I remembered from my digging that the RAID disk devices had the major device number "9", and the root minor device (from above) was "1", so the kernel knew which device it was trying to mount, but couldn't. To me, that said the RAID drivers were not built into the kernel and that I would need to build a RAMdisk with the proper kernel modules and libraries for it to mount the RAID device as root. I'd had enough and went to bed at that point, and took it up again the next day. Again, what a pain to find documentation (one of the reasons I'm writing this all out for posterity's sake... maybe I should write a magazine article, heh)! The trick was to use the "mkinitrd" script that comes with Slackware, and to do that you need the installed OS available because the command doesn't seem to be on the DVD's filesystem. Once the operating system is mounted [mount /dev/md1 /mnt; mount /dev/md0 /mnt/boot], create a copy of the /proc/partitions file on the disk version of the OS [cat /proc/partitions > /mnt/proc/partitions] (it will be the only file in that proc directory). Edit the /mnt/etc/lilo.conf file to include the line "initrd = /boot/initrd.gz" right below the "image = /boot/vmlinuz" line (and make sure the boot line is "boot = /dev/sda"). Then run the mkinitrd command to create the RAMdisk image and lilo to install it [chroot /mnt mkinitrd -R -m ext4 -f ext4 -r /dev/md1; chroot /mnt lilo -v -v -v]. Change the /mnt/etc/lilo.conf file to "boot = /dev/sdb" and run lilo again [chroot /mnt lilo -v -v -v] so LILO's configuration is installed on both disks. At this point, you need to delete the "partitions" file on the mounted OS image (it should be an empty directory for the virtual /proc filesystem when the system runs) [rm /mnt/proc/partitions].
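To save anyone else the digging, here is the whole initrd procedure from above in one block (run from the installer environment; it assumes the same devices and mount points as the text):

```shell
# Build and install an initrd so the kernel can mount the RAID root.
mount /dev/md1 /mnt
mount /dev/md0 /mnt/boot
cat /proc/partitions > /mnt/proc/partitions   # mkinitrd needs this inside the chroot
# (lilo.conf must contain "initrd = /boot/initrd.gz" below "image = /boot/vmlinuz")
chroot /mnt mkinitrd -R -m ext4 -f ext4 -r /dev/md1
chroot /mnt lilo -v      # with "boot = /dev/sda" in lilo.conf
# ...change lilo.conf to "boot = /dev/sdb", then:
chroot /mnt lilo -v
rm /mnt/proc/partitions  # /proc must be empty again for the real boot
```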

And that, my friends, is how I spent my summer vacation ;). The system booted (I tried switching the boot order via the BIOS and it worked fine), mounted its root filesystem, and loaded my shiny new Slackware64 13.37 installation in all its glory. Finally!!! But my journey is far from over... I now have to configure the system and integrate it with the framework I already have running so it can eventually take over from my current server (my plan is to move the pair of 200G disks from the current server to the new one and use them as part of a system backup strategy). I have to set up the LVM volume group on the big data partition and decide how to carve up the space into Logical Volumes (LVs). I have to decide whether I want to stick with NIS or move to LDAP for authentication (I've been meaning to for a while, but know it's going to be a colossal nightmare), I have to configure Samba (for file and print sharing with Windoze machines), I have to move my web sites to the new box (including migrating the MySQL databases for the Wordpress installations), and then migrate the data from my old server to the new data partitions. Sigh... it's a huge job with so many different technologies (each of which requires a great deal of expertise to use).
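As a preview of the data-partition work, carving up the big mirror with LVM would look something like this (the volume group name, volume names, and sizes here are purely illustrative, not my final layout):

```shell
# Put LVM on the big RAID 1 mirror and cut some initial volumes.
pvcreate /dev/md2                  # mark the mirror as an LVM physical volume
vgcreate vg0 /dev/md2              # one volume group spanning it
lvcreate -L 100G -n home vg0       # sizes are a first guess; they can grow later
lvcreate -L 50G  -n www  vg0
lvcreate -L 20G  -n var  vg0       # leave the rest of the VG free for expansion
mkfs.ext4 /dev/vg0/home            # then make filesystems as needed
```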

Actually, the next thing I need to get working after the upgrade is to sync my server's clock with the NRC NTP servers since the hardware clock on its motherboard swerves like a drunken landlubber on a crooked dock. But that will likely have to wait for the summer.
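When I get to it, the NTP setup should only be a few lines of /etc/ntp.conf, something like the following (the NRC hostnames here are from memory, so check NRC's site for the current official list):

```shell
# Minimal /etc/ntp.conf pointing at the NRC public time servers.
server time.nrc.ca iburst
server time.chu.nrc.ca iburst
driftfile /etc/ntp/drift   # lets ntpd learn the clock's (considerable) drift
```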
pheloniusfriar: (Default)
I have spent weeks (months? ... but certainly not too hard) trying to find the answer to how to use X11 with the XFCE window manager on my Slackware server at my monitor's recommended resolution of 1440x900. This has been startlingly hard to get information on (along with information on how to configure the onboard video hardware on my motherboard), so I have decided to document it here in hopes that anyone else having this problem can find a quicker solution to their woes. I have so much other stuff to post (seriously, I met Buzz Aldrin and Bill Nye and Neil deGrasse Tyson earlier this month, how cool is that?), but I just haven't had any time at all these past few months... I have been slammed so hard with work (both work work and school work and my own work... yes, I know that's 150%). Until then... anyone who doesn't care about xorg.conf files can safely skip the rest of this post ;).

Now, before I go any further, full honours to Arun Viswanathan for figuring it out first: ... thank you Arun!!!

Here are the steps: first, find the manual for your monitor. I have an eMachines E19T6W monitor, and finding the PDF of the User's Manual wasn't hard at all. There were two sections giving technical information: the Specifications section and a section on Video Modes. The Specifications section stated that the monitor has a 1440x900 native pixel configuration (which you always want to use if you can) and a 0.2835mm x 0.2835mm pixel pitch (which gives a display size of about 408mm x 255mm, needed for the xorg.conf file eventually). The Video Modes section specifies a whole bunch of resolutions, but the 1440x900 mode is given as "Mode 15 - VESA 1440x900 - Horizontal Frequency 55.935kHz - Vertical Frequency 59.887Hz - Available in DVI Mode (19-inch Model)? Yes.". The first bit of sorcery is the "gtf" and "xrandr" commands: the former automagically generates a Modeline for the resolution setting you want, and the latter allows you to add it and test it out interactively. The second bit of sorcery involves setting up an "xorg.conf" file and XFCE configuration to make it permanent going forward. To interactively test out the hardware, first get the Modeline needed by running "gtf <horizontal resolution in pixels> <vertical resolution in pixels> <vertical refresh rate in Hz>":
  gtf 1440 900 59.887
which resulted in the output:
  # 1440x900 @ 59.89 Hz (GTF) hsync: 55.81 kHz; pclk: 106.27 MHz
  Modeline "1440x900_59.89"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
This then needs to be used to create that video mode using the "xrandr" program. Note that the word "Modeline" is left off, and also note that I got the tag "VGA-0" as being the port I was using just by running the "xrandr" program with no parameters to get the current state of the display subsystem (it showed VGA-0 connected, and DVI-0 disconnected, which is how my system is configured: I am putting my VGA connection through a KVM so I can share my desktop [in the physical sense] between my server and a desktop PC... I need to get a DVI-capable KVM someday, but it's not a high priority by any means).
  xrandr --newmode "1440x900_59.89"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
  xrandr --addmode VGA-0 1440x900_59.89
  xrandr --output VGA-0 --mode 1440x900_59.89
And this switched the video mode to 1440x900 (you can check just by running "xrandr" with no parameters)! Now, one important thing to be said is that it actually looked pretty shitty at this point... but the good news is that this was just an intermediate step, and as I type this, the display looks marvy! The solution to the quality of the display and fonts and stuff was a two-step process. The first step was to create an "/etc/X11/xorg.conf" file on the system for the particular configuration I was using and to restart X11/XFCE. After the configuration file was created, XFCE came up in 1024x768 mode (which I presume is a fallback setting, because just about everything supports that video mode). Typing "xrandr" showed that "1440x900" was now a supported mode, but it had not been selected by XFCE. The second step was to go to the "mouse menu" (in the lower left corner), run Settings->Display, select the Resolution "1440x900" from the pulldown, and Apply the change. Once that was done, all the weird font and display quality issues I had when forcing the issue manually with the "xrandr" program went away, and I had a beautiful desktop to work from! Yay! Just to be sure, I exited X11/XFCE and restarted it, and the settings stayed, so it's a permanent fix.

Taking a step backward, one of the things that I wasn't sure was set up right was the video hardware. It's the onboard video of the old Asus M4A78LT-M LE motherboard I have in my server (if it ain't broke, don't fix it y'all), and I wasn't sure the right drivers were being used (I was flailing pretty hard trying to get this to work and followed all sorts of weird paths on my way). I should further mention that I am using the onboard video because my server is really a server and I don't need much in the way of video display capabilities (99.9% of the time, I'm connecting to it over the network from another system running X11 or just through SSH or something, so I hardly ever use the console unless I'm doing serious maintenance on it). The video chip is an ATI 760G class chip (Radeon 3000 family), and I read innumerable old posts about how it was not properly supported by the X servers of the day, and more recent posts about how ATI has dropped all support for it from their proprietary Linux drivers. It was not looking good at first, but it turns out there is an open source alternative for this class of video hardware that goes by the name "xf86-video-ati" (and shows up in the kernel output as the "radeon" driver). I initially thought this driver was not even being invoked (as I said, it was really hard to find information, and much of it was conflicting, confusing, or just plain false), but when I finally knew what to look for, I realized the correct driver was running and that it was simply a configuration issue I was dealing with. The breakthrough happened when I found the Wiki page for the driver. Once I had that, it was smooth sailing with the configuration options for the driver and my card (which I have reproduced below).

The last thing I wanted to mention is what is required to create a working "xorg.conf" file. Again, one would think that this would be easily accessible information, but one would be wrong... Not to beat around the bush, the first thing that is needed is a "Device" section. This could be quite simple and only contain key/value pairs for "Identifier" and "Driver". I went a bit further with actual configuration parameters, but it's the "Identifier" that is critical to building a working "xorg.conf" file. I used the Wiki page mentioned above to get the information needed, and used the model of my motherboard as the identifier value. The next thing that is required is a "Monitor" section. Again, this could have as little as the "Identifier" and "Modeline" keys. In my case, the identifier was given the value "E19T6W", but these are just text strings and could just as easily have been "Fred" or "Wilma"; just pick something that makes sense for the monitor you have and the way your brain works (and the same goes for all the identifier values). I went further and used information from the User's Manual for my monitor to put in the minimum and maximum values for the horizontal and vertical frequencies, and also put in the physical dimensions of the screen so that things would display at the correct size (12 point fonts should be 12 point fonts in physical dimensions on the screen, etc.). FYI, I got the values I used by multiplying the dot pitch by the horizontal and vertical resolutions, but verified those numbers with a ruler, and they were correct. It is in the "Monitor" section that the "Modeline" generated by "gtf" goes. Finally, there needs to be a "Screen" section that pulls it all together. I gave this section the uninspired "Identifier" of "Default Screen"; here, the "Device" and "Monitor" sections to use for the screen are referenced by their identifier names.
The rest of that section is pretty much boilerplate (including the "Display" subsection), but it is probably good to have multiple resolutions available in case you want to swap displays out at some point (if the display you are using fries) as there is usually some keyboard combination that allows you to switch video modes on the fly between supported modes.

The final "/etc/X11/xorg.conf" file that worked for me is as follows (note that I used the Modeline label "1440x900" rather than "1440x900_59.89" as provided by the "gtf" program as I didn't need to support multiple versions of the 1440x900 resolution):
Section "Device"
  Identifier "M4A78LT-M LE"
  Driver "radeon"
    # software cursor might be necessary on some rare occasions,
    # hence set off by default
  Option "SWcursor" "off"
    # supported on all R/RV/RS4xx and older hardware, is on by default
  Option "EnablePageFlip" "on"
    # valid options are XAA, EXA and Glamor. Default value varies per-GPU
  Option "AccelMethod" "EXA"
    # enabled by default on all radeon hardware
  Option "RenderAccel" "on"
    # enabled by default on RV300 and later radeon cards
  Option "ColorTiling" "on"
    # default is off, otherwise on. Only works if EXA activated
  Option "EXAVSync" "off"
    # when on increases 2D performance, but may also cause artifacts
    # on some old cards. Only works if EXA activated
  Option "EXAPixmaps" "on"
    # default is off, read the radeon manpage for more information
  Option "AccelDFS" "on"
EndSection

Section "Monitor"
    Identifier      "E19T6W"
    HorizSync       30.0-75.1
    VertRefresh     50.0-75.0
    DisplaySize	    408 255
    Modeline        "1440x900"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
EndSection

Section "Screen"
    Identifier "Default Screen"
    Device     "M4A78LT-M LE"
    Monitor    "E19T6W"
    DefaultDepth	24
    SubSection "Display"
       Viewport   0 0
       Depth     24
       Modes    "1440x900" "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
If you have been struggling with something similar to this, I hope this helped you...

pheloniusfriar: (Default)
I am beginning the process of building, installing, and configuring a new Linux server at home and have decided to document my research and experiences here as I go. All of these entries will be tagged with "linux" for later ease of reference. The server I have currently is running the Slackware 12.2 distribution and provides me with NFS and Samba local file system sharing (for the other Linux and Windows systems in the house respectively, over both CAT-5 Ethernet wiring and WPA-2 secured wireless communication), remote SSH connectivity, web services (I have holes punched in my firewall and port forwarded to my server for SSH and HTTP), and local MySQL services (mostly for web stuff; specifically, I have Wordpress running on it). The current server has a boot disk and a pair of 200G high performance drives in a RAID 1 configuration (simple mirroring) using software RAID drivers. On top of that it is using the Logical Volume Manager technology. On top of that it is using the ReiserFS (version 3). I have the disks broken into a bunch of different volumes and it's been great because I've been able to grow them as I've needed more space... an advanced feature that's super awesome (that's the technical term) to have on a home-based system. I've pretty much filled up the 200G disks and have had to start "off-lining" stuff, but it's worked out pretty well. I seem to have finally gotten myself a stable Internet connection from Teksavvy over cable (the phone line stuff was an unstable and deteriorating mess because Bell refused to maintain their infrastructure and Teksavvy was dependent on them for "the last mile"). So it's now nominally 15Mbps down and 1Mbps up... certainly respectable for a home-based node (and something only medium and large organizations could manage just a decade ago)!
The processor board in the existing server is a little underpowered, but seems to keep up fine with basic web, database, and file server demands (an AMD Athlon class 2.8GHz CPU and only 1GB of RAM), so that's going to be upgraded as well, and this board will be re-purposed to be my new desktop system for my room (my current one is a poor old IBM NetVista that I pulled from a scrap heap).

So... where am I now... well, a while back (quite a while... I'm embarrassed to say how long, but they were still relatively inexpensive) I picked up a pair of 1TB hard drives. Those are going to form the core of my new system. I had been struggling with what Linux distribution to use, but most of what's out there these days gives me a rash for use on anything but a netbook (yes, Ubuntu, I'm looking at you). I've been using Slackware since... well, since I moved away from the SLS distribution (that was the first Linux "distribution" for all you newcomers... and I should mention that I have been using Linux for real work since kernel version 0.11... yikes). I like the fact that Slackware is "hard on the bare metal" like a good Unix server platform should be... minimal bells and whistles, and some knowledge is required to get it working, but when it does, it just goes and goes and goes without a hiccup. Exactly what a server should do (if you're interested in desktop systems, I have been told Slax takes care of all the stuff that is annoying about trying to use Slackware as a desktop computer... of course Ubuntu and its ilk all have their places too). I had become somewhat disillusioned with Slackware over the past couple of years... the development group underwent major changes and all sorts of stuff seemed to have been half done (and Patrick, the "Benevolent Dictator for Life" of the Slackware distribution, had been sick for some period over the past few years). It was also hard to track down the development team and provide feedback, etc. I was looking at my options when, lo and behold, they released a new version of Slackware a few weeks ago: Slackware version 13.37. Besides having an entertainingly cool version number (the major release of Slackware dot the minor release of the 2.6 kernel they used, lol), it brought a bunch of stuff up to date and provided useful information on how to connect with the development team so I could report bugs, etc.
To that end, after the choice of hard drives, my choice of operating system is easy: Slackware 13.37. I have continuity that way, and it's always been good to me and I know all the features I use are already in it.

So what I'm going to do is build the system around the 1TB drives and do a basic cut at the LVM volume sizes to start (so I can migrate my data from my old system eventually). Once I've finished the data migration and am ready to retire the current box from service, I am going to take the 200G RAID drives and move them into the new server. I will be using the 200G drives as backup storage for the 1TB drives... I will take periodic snapshots of the root filesystem (including /usr and /opt), but will also do backups of all the dynamic data in /var, the home directories, the web directories, and various shared directories. I will be backing up media (e.g. MP3s as I rip my CDs) onto DVDs for storage in case of failure (it'll be incremental, in 4GB chunks, as I create the files, since the data will not change once it's been created). Having a good backup strategy is something I don't have right now (yes, bad Friar!). In the current system, if the boot drive goes down, I lose my OS and all the configuration associated with it (I have taken snapshots, but I don't have a proper strategy). In fact, when I upgraded my OS to Slackware 12.2, I installed a new boot disk because my old boot disk was getting old. Good thing too, because the old disk (which I was using for temporary storage on an "as needed" basis) had a head crash about a week ago... yikes! It's only a matter of time, not an "if" but a "when". Currently I'm also relying on the RAID configuration to save me if I lose one of the 200G drives... not what I'd consider a viable go-forward strategy either.
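The 4GB-chunk idea is easy to sketch: tar the media tree, split the stream into DVD-sized pieces, and make sure the pieces reassemble bit-for-bit before burning. Here it is in miniature (a self-contained demo with a tiny stand-in tree and 1KB pieces; real use would be "split -b 4G" on the actual media directories):

```shell
# Miniature dry run of the DVD backup scheme: tar, split, rejoin, verify.
workdir=$(mktemp -d)
mkdir -p "$workdir/media" "$workdir/restore"
printf 'track %s\n' $(seq 1 500) > "$workdir/media/tracks.txt"
tar czf - -C "$workdir" media | split -b 1024 - "$workdir/backup.tar.gz."
# Restoring is just concatenating the pieces back into tar:
cat "$workdir"/backup.tar.gz.* | tar xzf - -C "$workdir/restore"
cmp "$workdir/media/tracks.txt" "$workdir/restore/media/tracks.txt" && echo "backup verified"
```

If the final cmp succeeds, it prints "backup verified"; a checksum file (e.g. from sha256sum over the pieces) alongside the burned set makes later verification just as easy.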

Anyway, in the new server, I'm not going to have a "boot disk" anymore; I'll use a small boot partition on the 1TB drives (mirrored) to boot off of. That way, if I lose one of the drives, I'll be able to boot off the remaining drive and start the recovery process. I'm planning to have a root volume with all of my OS and application and configuration data that is relatively static. I'm contemplating a volume for /var this time around and possibly one for /tmp (a classic failure mode is for /tmp to fill up, but that's more prevalent on systems that have a lot of users, so I'm debating whether to perhaps just make one volume for both /tmp and /var). Given that I have disk space to burn and I can grow volumes as needed, I'm probably going to try the separate filesystems to see how it goes. The rest of the disk will be seeded with data partitions large enough to hold all the data I have now, plus my expected size for the next year or so. That will leave lots of extra space on the TB drives for future expansion of the volumes as needed.

That leaves the choice of filesystem. I'm running ReiserFS now and it's been rock solid, but it has fallen out of favour since Mr. Reiser was convicted of the first degree murder of his Russian mail order bride (what? yup...). ext3 was the previous default filesystem type, but ext4 has been released since and is considered stable. ext4 has most of the features I use on ReiserFS and can also be dynamically resized (a critical piece of functionality for me). The other contender is Btrfs, which incorporates a lot of the design ideas of the next generation ReiserFS (which people have continued working on despite the incarceration of its namesake because it did have some great ideas in it), and is widely considered (even by Theodore Ts'o, the architect of ext3 and ext4) to be the next destination for Linux filesystems (with everything else, including ext4, as stopgaps); however, the consensus is it's just not ready for prime time. While ReiserFS has done well for me, it has issues with multi-core processors, and those sorts of limitations are addressed in ext4, Btrfs, and the next generation (version 4) of ReiserFS. However, because of technical concerns (amongst other things), Reiser4 has not made it into the kernel mainstream (although it's widely expected to sometime in the near future), so that really only leaves ext4 as the logical choice for me at this point. Done.
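The whole layout can be sketched as a command outline. These are illustrative only: the device names (/dev/sda, /dev/sdb), partition numbering, the volume group name vg0, and all the sizes are assumptions, and everything here needs root on the actual hardware.

```sh
# mirror matching partitions from the two 1TB drives
# (sda1/sdb1 = small /boot mirror, sda2/sdb2 = big LVM mirror)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# put LVM on the big mirror and carve out volumes,
# deliberately leaving lots of free space in the volume group to grow into
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n root vg0
lvcreate -L 10G -n var  vg0
lvcreate -L 2G  -n tmp  vg0

# ext4 everywhere; a volume can be grown later without reformatting
mkfs.ext4 /dev/vg0/root
lvextend -L +10G /dev/vg0/root
resize2fs /dev/vg0/root   # grows the filesystem to fill the enlarged volume
```

The lvextend/resize2fs pair at the end is the dynamic-resize feature that makes ext4 workable here: the logical volume is enlarged first, then the filesystem is told to expand into it.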

LILO is the only bootloader that ships with Slackware, so that's an easy one :).
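For booting off the mirrored partition, the lilo.conf might look roughly like this. The kernel path, root volume, and label are assumptions based on the layout above; the key pieces are pointing `boot` at the RAID device and using `raid-extra-boot` so a boot record lands on both member disks, letting either drive boot alone if the other dies.

```
boot = /dev/md0              # install LILO on the RAID 1 boot mirror
raid-extra-boot = mbr-only   # also write MBRs to both underlying disks
prompt
timeout = 50

image = /boot/vmlinuz
  root = /dev/vg0/root
  label = slackware
  read-only
```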

Finally, there's the realm of applications. I have been none too happy with the purchase of Sun Microsystems by Oracle. Sun was on the hairy edge of acceptable at the best of times, but Oracle is and has been an "evil empire". With the purchase of Sun, they now own three very important technologies: Java, MySQL, and OpenOffice. There are various efforts underway to make proper 3rd party versions of Java (the Android OS uses its own Java Virtual Machine, for instance, but it's not a general purpose engine), but that's a ways off (Microsoft did hamstring the technology by attacking it with incompatible products that it called Java, but that seems to have settled down finally). I use MySQL for my primary database engine on my server and I went out looking for alternatives... nothing in the mainstream looked like it was ready... however, recently, the former architect of MySQL and a team of developers that had forked the source tree from MySQL in 2008 finally released a "general availability" version of their new (and compatible) database: Drizzle. I will definitely look into substituting that from the outset. Also, most of the developers who had been working on the Open Source version of OpenOffice forked that source tree and created LibreOffice. Bug fixes actually appear to be happening in that project, so I will also be investigating that (OpenOffice still has major bugs that I reported to them back in 2003... it's pretty bad).

Well, that's where I'm at now... off to work and eventually to start work building my new server :).

