pheloniusfriar: (Default)
sed 's/\/></\/>\n</g' file.xml | m4 macros.m4 - |
    sed ':a;N;$!ba;s/\/>\n</\/></g' > file.out
But it worked... ick.

GNU m4 wasn't doing the specified (simple) macro substitutions on its own. I am wondering if it is because the XML file was just one big long line (26411 characters)? I didn't think line length was an issue for GNU m4, but it works fine when I break it into one line per XML statement (-ish). I can't be arsed figuring out what is going on right now so I will use this, ick, workaround for now (maybe some day... well, probably some day... I hate not understanding why things fail like that so I don't use the tool for something similar in the future and hope it works). Sigh.
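For what it's worth, the split/join sed idiom itself round-trips cleanly and can be sanity-checked in isolation (a minimal sketch with a made-up three-element input; GNU sed assumed, since the `\n` in the replacement is a GNU extension):

```shell
# A made-up one-line XML fragment standing in for the real 26411-character file.
in='<a x="1"/><b y="2"/><c/>'

# Split: insert a newline between every '/>' and the '<' that follows it.
split=$(printf '%s' "$in" | sed 's/\/></\/>\n</g')

# Join: slurp the whole stream into the pattern space (:a;N;$!ba),
# then remove the newlines that were inserted above.
out=$(printf '%s\n' "$split" | sed ':a;N;$!ba;s/\/>\n</\/></g')

echo "$out"   # identical to $in
```

The round trip reproducing the input is what makes the workaround safe to wrap around the m4 step.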

(recommended by Lipps Inc. themselves as "possibly the BEST version of Funkytown ever")


Sep. 7th, 2017 04:39 am
pheloniusfriar: (Default)
I find it funny that whenever I type "fucking" into my phone, its heuristics kindly (and almost always correctly) suggest "autocorrect" as the next word.

pheloniusfriar: (Default)
I am forced to use Windows 8.1 for some of the work I do at Carleton (if I use Windows at home, I have either Windows 7 Pro or Windows XP, which I need for some of the stuff I have to run... it just won't run on Windows 7). Today, I just spent half an hour trying to change my password. Seriously. What the actual fuck? Went to Control Panel and Users and... all kinds of settings, none of which I give a rat's ass about, and none of them (including a hopeful-sounding one about updating credentials for Windows) allowed me to do a simple change of my password. Okay, go to Google... article after article about how to recover passwords (including YouTube videos), but nothing about how to actually change the password. The solution? Ctrl-Alt-Delete. Seriously. What the actual fuck? Ctrl-Alt-Delete is the uninterruptible keystroke and in the past has been reserved for exceptional system-level functions, not day-to-day administrivia like changing one's own password. If I wasn't running Classic Shell on the system to at least give myself a tolerable Start menu, I would be on here ranting about how much Windoze sucks 24/7 (or at least until I passed out from exhaustion in my own vitriol).

You can thank your lucky stars for Classic Shell.

Okay, now back to emailing a document, which I had to do from Windows on my dual boot system at work (it's on Windows because the software I had to use for the project only runs on Windows so it made sense for me to leave my files there... although I should mount the Windows file system on Linux so this doesn't happen again... this is the first time I just needed files from my Windows partition rather than having to run software on Windows so it didn't occur to me until just now... be assured it will be done this afternoon!).

Ugh, so much productivity wasted.

Speaking of which (although I consider musical and artistic experiences to be an important time sink rather than a waste of freakin' time like Windows)... and to provide some "value" so my posts aren't a total waste of your time.

Russian electric balalaika and beatboxing?

pheloniusfriar: (Default)
I just got the following email, which reads in part...

I am glad to reach you on behalf of Condensed Matter Physics 2017 Organizing Committee, after having a view at your vast expertise and eminent contribution in the research relevant to Theoretical and Condensed Matter Physics, we courteously welcome you as a speaker for the upcoming Condensed Matter Physics Conference from October 19-21, 2017 in New York, USA.

Hahahaha, ummm, no. Unless they are time travellers from the future and know something I don't about what I'm going to accomplish, I don't think they have anything on me other than my email address ;).

pheloniusfriar: (Default)
Warning: technobabble post (skip to video if such talk upsets you).

I am currently fighting to produce a silkscreen .dxf or .dwg file from a 3D model in Autodesk Inventor. It's funny that creating the 3D model of the part (including learning the program to do so) was almost trivial (the tutorials that came with the program are actually great, much to my surprise), but sending out the lettering to finish the panel is proving to be a huge muddled task (there are many forum threads on various ways to do it, but all of them agree that it sucks).

One of the first things I came up against was that I imported the design for the Hammond chassis I'm using as a STEP file and then pulled out the faceplate extrusion as a part to modify. Overall the process went very well, but when the part was created, the origin was placed in a weird spot on the part, and the part was rotated oddly so that its front face wasn't in the X-Y plane (with the long edge along the X axis) like it intuitively should be to me (since it was pulled from a full assembly in the STEP file, this is not too surprising though). What was confounding is that there didn't seem to be any way to orient it relative to the origin and in the direction I wanted to work with (when I applied what IRL is a horizontal constraint, I had to use the vertical constraint option on the sketch... not intuitive). Well, I just found out how to move the solid object around in 3D space... it was an option that was not normally visible: I just had to pull down the Modify panel expando arrow and there it was: "Move Bodies". It allows for translation and rotation of the part. To get to it, it's "3D Model->Modify->Move Bodies". There is a little cube in the tool's dialogue box that, when clicked, gives you a pulldown to select the operation you want to perform. In my case, I needed to rotate the solid, then translate the corner I needed to the origin. To figure out how far I needed to translate it to align with the various axis planes, I used "Tools->Measure->Distance" and then just typed the numbers in.

As a note, Autodesk Inventor is available for free if you are a student or work at a university or college, and are not going to be using it for commercial purposes.

Hahaha, the video was a trap! ;)

(but catchy as all git out)
pheloniusfriar: (Default)
The FCC voted today to overturn net neutrality in the USA

The Internet: invented by the US, adopted by the world as a great tool of democracy and equity, abandoned by the US in 2017.

pheloniusfriar: (Default)
... about standards, is that there's so many to choose from.

I have been bashing myself senseless for days on the Timed Text Track API in JavaScript/HTML5, trying to use it dynamically with video playing on a canvas (I'm working on the v0.9 version of the demo for the revived Midnight Stranger... v0.8 introduced support for touch screens). Nothing seems to work the way the specification indicates it should, so I'm debugging one micro-step at a time now. The only thing making it tolerable is that I'm listening to the Samorost 3 game soundtrack (by Floex). Such an evocative collection of music! I got the edition of the game that came with the soundtrack (in MP3 and FLAC formats) and a digital art book from the game :). I also got Samorost 2 and Botanicula to round out my game collection from that group (I have been playing Machinarium for years and still dust it off every once in a while... it's a puzzle game, so it has limited replay value until enough time has passed that I've forgotten the solutions, but the scenery, characters, and music are still great). You can play Samorost 1 online if you are so inclined... the other games have teaser levels online as well :). Anyway, I haven't actually started playing Samorost 3 yet, but I am listening to the music, which is quite pleasant.

The animation in these kinds of games reminds me of the animation in the short film Krapooyo by Yannick Puig... and one of my favourite "fan vids" is where someone put music from the psychedelic band Shpongle over top of Krapooyo :) ...

pheloniusfriar: (Default)
Was getting a weird error message during the tutorial for the CAD software I'm trying to learn (AutoDesk Inventor... it's actually pretty straightforward, I'm surprised). Googling the message gave a very clear explanation of how to fix it: "Cause: Full version of Microsoft Excel is not installed. Solution: As mentioned in System Requirements of Inventor 2016 a full version of Excel is necessary for working with threads." Ugh. Luckily I can get it, but ... ugh ... poison!

Still waiting for my final marks... O_o

pheloniusfriar: (Default)
Well, I am finally back up to a basic operational level with Verilog coding... only to find out that the project I will be working with has been done in VHDL. While one language versus another is usually no big deal for me (okay, I hate C++, but I've been using it since the 80s when I worked on what was, at the time, the largest C++ project in the world and I'm good at it, but that doesn't mean I have to like it), VHDL has its roots in the Ada programming language D:. Why the grumblings? Ada was developed by the U.S. Department of Defense and is one of the most notorious Bondage and Discipline languages in existence. While B&D might be fine, it needs to be consensual and nobody ever asked me if I wanted the flagellations VHDL/Ada will entail ;).

It does remind me of an interesting side path in processor technology development from way back when... the Intel iAPX 432 system. By any measure, this was a complete failure for Intel (and the industry as a whole), but it introduced a number of features that we now see in most modern processors. I'm not going to go into it here, except to say that it supported object oriented data access and security control models at the hardware level, supported explicit hardware fault tolerance and multi-dimensional data busses, had superscalar processing elements, and so many other features that were too far ahead of their time (and thus made the system intolerably slow and cumbersome, and thus uncompetitive). I remember that the instruction set was actually a bitstream read into the processor in 32-bit chunks and parsed, and that instructions could be anywhere from 4 to 111 bits in length! It really was an engineering masterpiece, but I often mused that the people that worked on it must have been locked up in the loony bin afterward en masse (I think one of them went on to be CEO of Intel or something... maybe the same thing? Heh). Anyway, why I bring this up is the 432 was never meant to be programmed in assembly language or even "system" languages like C, but rather was designed such that Ada was essentially its assembly language. Perhaps that is another reason (maybe even moreso) for its demise ;). Sadly, VHDL is widely used in the electronics design sector, so it was inevitable that Ada would eventually catch up with me... I took two textbooks on VHDL out of the Carleton library on Wednesday and have started reading them. I am determined to progress, if equally resigned to my fate.

I'll make sure to leave a tube of lube on my desk as I work... it might make the proceedings a little more comfortable to me ;).

On a completely separate note, I am currently listening the heck out of Floex's album "Zorya" (Floex is the project of Czech composer, musician, artist, producer, etc. Tomáš Dvořák, who also did the gorgeous soundtracks for the delightful games Machinarium, which is where I first heard his work, and Samorost). The music on this album successfully pulls from so many different styles: prog, classical, industrial, pop, etc. and puts them together into what I find a very pleasing whole, blending acoustic instruments/sounds with synthesizers and samples. In particular, on one track (Forget-Me-Not), he plays piano and a clarinet without a mouthpiece that I can listen to over and over again... melancholy and evocative, it really floats my boat right now. The clarinet played like a trumpet has a very distinctive sound (to say the least) that makes that song stand out for me. There are many different moods throughout the album and even within the songs that keeps it interesting all the way through. It also features the best Yes song not by Yes I've heard in a while ("Precious Creature" featuring the vocals of James Rone), heh. You can listen to it free on his Soundcloud page (or listen to and potentially buy it at his Bandcamp site):

(the Soundcloud page is nice because it talks about some of the instruments and credits the other people who performed on the album... just click on "Show more...")

Edit: I started reading Douglas L. Perry's book "VHDL Programming by Example", Fourth Edition, McGraw-Hill, ISBN 0-07-140070-2. It bills itself as "the hands-down favourite user's guide to VHDL", but good lard, what a disaster! I made it as far as page 4 and had already picked out typos and plain wrong information... the cruftiness continued on page 5... and I have given up and tossed the book aside (gently, it's a library book). On page 4, they refer to the "counter device described earlier", but the only thing described earlier is a multiplexer (there's nothing else earlier in the book; this is the start of the book!). On page 5, it reprints a fragment of code from page 4 and says it's from the "architecture behave" code, but the code it is referring to is clearly "architecture dataflow". What a crock of shit (and it hasn't helped my opinion of VHDL any either, I might add, ugh). There does not appear to be any errata for this hunka-hunka-burning-turds. Let's try the other book I got from the library, sigh.

Second edit: I am now digging in to Peter J. Ashenden's book "The Student's Guide to VHDL", Second Edition, Morgan Kaufmann, ISBN 978-1-55860-865-8. I now know one of the reasons why VHDL has always seemed somehow wrong to me (while Verilog seemed a sensible approach in contrast). Quoting from the Preface: "one pervasive theme running through the presentation of this book is that modeling a system using a hardware description language is essentially a software design exercise". And there you have it... VHDL became popular because it views hardware design as an exercise in software design. Since there are so many programmers in the world (thousands per hardware designer), it is a seductive statement that anyone who can write a program can design an integrated circuit. However, that is like saying that giving someone a Digital Audio Workstation (DAW) will allow them to effectively compose good music. Yeah, anyone using looping composition software can create something quite pleasing and often interesting, but it is not crafted the way a trained composer or musician would do it. It is also much harder to innovate (in music or in hardware design) without deep training in the art in question. I find it interesting as well to note that there are thousands of folk musicians for every trained musician (I consider most rock, etc., to be kinds of folk music... and before you think I'm being snooty, I only do "folk" music myself in one form or another, and I have no pretensions about where my music fits into the spectrum of musical sophistication). I have done both serious hardware and software development (and I can do me a mess o' software... I've been programming complex and sometimes mission-critical software for decades and I'm very good at it), but the two skillsets are radically different. Anyone with a VHDL compiler in one hand and an FPGA in the other can probably get something to work that'll do the job, but it is going to be sub-optimal in many potentially important ways (if not subtly buggy). This explains a lot of what I have seen lately with a number of ASICs, hmmm. The medium (VHDL) is the message.

I am starting to regret that I wasn't telling the truth when I said I'd keep lube by my side as I worked...
pheloniusfriar: (Default)
I have had a special fondness for the FORTH language since I first encountered it many, many years ago. There is an elegance, and dare I say beauty, to it that other computer languages lack. It was its own operating system, integrated development environment, and programming runtime from the early 1970s, and a powerful paradigm for software development in the 1980s with the advent of microcomputers (I did one actual application in it professionally, but used it a lot to play with different ideas I had on my own as well). There is something about the way of approaching application development with FORTH that is different from pretty much any other language: you basically write a new language, based in FORTH, for the application, rather than a series of functions or objects to implement the needed functionality to support the application. It's a fundamentally different way of approaching problems, and one that I find very satisfying. The only other language that comes close to it for me is Tcl (the Tool Command Language, pronounced "tickle"), which is an odd duck of a computer language as well (and one in which I have developed some very sophisticated application software for "electronic data interchange" and such), but that's another story.
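To give a flavour of what that "write a new language" approach looks like (a sketch from memory in the classic Starting FORTH style, not taken from any particular system):

```forth
\ Each colon definition adds a new word to the dictionary; later words
\ are built from earlier ones until the program reads like a vocabulary
\ invented for the application itself.
: SQUARED  ( n -- n^2 )  DUP * ;
: CUBED    ( n -- n^3 )  DUP SQUARED * ;

\ 5 CUBED .    leaves 125 on the stack and prints it
```

After a few layers of this, the "application" is mostly just the top-level words you have coined.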

One of the things that differentiated FORTH as well is a famous book on the language called "Starting FORTH" by Leo Brodie. It was illustrated with little cartoons to act as mnemonics for learning, and covered everything one needed to know to get going and even get good with FORTH, along with a bunch of "under the hood" stuff for those doing serious programming. I think I was mumbling something about it to a friend and I got to thinking... I wonder if that book is online anywhere? Sure enough, it is, but it is in a form that is a little challenging to try to read (it was only allowed in HTML; a PDF was not permitted by the copyright holders). I had been hoping to show my friend a little bit about it using the fun (but in-depth) introduction done by Brodie, but I figured if I recoiled from the online version, my friend (who has only done a bit of programming) would be flummoxed by it entirely. The FORTH user's groups or fan pages or whatever they were said: "if you ever see a copy of Brodie's book for a reasonable price, buy it! They can be rare and expensive!". So, at the suggestion of my friend, I went online, did find a copy of the book for a very reasonable price, and went ahead and ordered it. It arrived a few days ago and I have been casually reading through it. I am falling in love with FORTH all over again, but also realize the serious limitations it had back then (the book was based on the FORTH-79 standardization, and there have been two rounds since then, with the latest in 1994, called "ANS FORTH"... so modern features have been added for sure).

Anyway, I just wanted to share one "adorable" quote from the book... a true time capsule of the state of the computing world back in 1981 (at least the microcomputer computing world):

Disk memory is divided into units called "blocks". Many professional FORTH development systems have 500 blocks available (250 from each disk drive). Each block holds 1,024 characters of source text. The 1,024 characters are divided for display into 16 lines of 64 characters each, to fit conveniently on your terminal screen.

Awwww... so cuuute! :)
pheloniusfriar: (Default)
I remember using LaTeX at a job I had back in 1987. The “LaTeX User’s Guide and Reference Manual” by Leslie Lamport was first published in 1985, but the version I have was published in 1986 and describes LaTeX version 2.09. What amazes me is I’m still using LaTeX in my professional and personal work. I prefer it to most WYSIWYG packages for technical work, and its fundamental paradigm of operation allows me to focus on my writing (i.e. content) rather than fighting with formatting the document as I go (it is document mark-up where the meaning of the content of the document is specified rather than the formatting to use for it... and the formatting is taken care of at the end by computers, which are pretty good at doing it right). What amazes me even more is I continue to discover new extremely professional packages available for it that I never knew existed until I needed a feature and went looking for it.
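As a tiny illustration of what “mark up the meaning, not the formatting” looks like in practice (a generic fragment invented for this post, not from any document of mine):

```latex
% You declare what each piece of text *is*; LaTeX decides how it looks.
\section{Results}          % a heading: numbering, font, spacing all automatic
We measured the \emph{drift velocity}.  % "emphasis", not "switch to italics"
\begin{equation}
  v_d = \mu E              % a numbered display equation, no manual layout
\end{equation}
```

Swap the document class and every heading, equation, and emphasis reformats itself consistently; that is the whole point.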

Right now, I was getting tired of drawing digital logic timing diagrams by hand (this is in the early design phase, so doing circuits in a proper design tool and using their simulation output isn’t really an option... and besides, the output usually looks like shit in a document). I went and found a package called “tikz-timing” that does a very nice job of rendering timing diagrams, and it has already helped me wrap my brain around the quite tricky design I am attempting. The tikz-timing package is built on top of something called “TikZ”, which itself is built on the “PGF” package (the two are bundled together to install). I had to install TikZ/PGF... it was pretty huge, and I decided to look at the documentation that came with it to find out what it did. Consider my mind officially blown! The user’s manual for the package is 1200 pages of condensed and to-the-point (although adequate) descriptions of its capabilities and use! Ultimately, it’s kind of an expert drawing system, and I am sure I recognize the style of some of what it can do in many of the professionally published books and articles I have read. Just scrolling through the documentation is breathtaking (if you love beautifully typeset figures, which I do). Rather than describing what it can do, it might be better to say that I’m not sure there is anything it can’t do, and do beautifully with minimal guidance from the author (that it does a good job on its own of drawing node/edge type graphs is a feat in and of itself). This is definitely a package I need to learn (although it’s going to be piecemeal for the foreseeable future). The last package I just downloaded (I hope that’s it) is called “circuitikz” and is, as one might guess, a TikZ-based package for typesetting circuit diagrams. It’s non-intuitive to me as to how to specify the drawings, but once I figure it out, it will come quickly I’m sure. The examples in the documentation are wondrous to look at, and again I am sure I recognize the style in professional texts I have read.
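For the curious, a minimal sketch of the sort of input tikz-timing takes (the signal names and waveform here are invented for illustration; check the package manual for the full set of timing characters):

```latex
\documentclass{article}
\usepackage{tikz-timing}
\begin{document}
% Each character is half a clock period: C = clock, D{...} = labelled data,
% Z = high impedance. Signals and values below are made up.
\begin{tikztimingtable}
  clk  & 8{C}            \\
  data & 2D{A} 2D{B} 4Z  \\
\end{tikztimingtable}
\end{document}
```

A one-line-per-signal table like this is the whole specification; the package draws the aligned waveforms itself.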

Two things, however, prompted me to post. Firstly, I decided that I need a break from Twitter... I still have my account (@PassionateFriar), but I stopped following anyone because I really need to focus on my school work and getting my life back as I approach the completion of my undergraduate degrees. Secondly, and back to LaTeX... argh! This highlights one of my gripes with open source software efforts: sometimes they are borked in subtle and not-so-subtle ways because of ideological currents. One particularly egregious example is the fact that there is no way to draw an arrow in the otherwise excellent GNU Image Manipulation Program (a package somewhat like Photoshop, but free) because of the developers’ preference for ideological purity over the hours upon hours wasted by the tens of thousands of people who will inevitably have to draw arrows (you have to download expansion packages and install them to draw arrows, and it’s a non-intuitive process to do the installation and then figure out how to draw a freakin’ arrow). Well, with LaTeX, they went from the original (LaTeX 2.09 anyway, that is as far as I go back) command \documentstyle (in which you could load the packages you needed as options), to a new command \documentclass, with packages now loaded via a separate command called \usepackage. I certainly don’t mind the change per se, but if you use one type, LaTeX imposes a strict “flavour” selection on your document and you cannot mix the styles. Again, fair enough, but it took me ages to figure out what the heck was going on and why. It was trivial to fix (I’m using the new style going forward), and I am able to take advantage of features of the new style for the work I am doing, but the error messages were obtuse. Even when I found a page that told me what was going on, it was a reference mention rather than an explanation and I had to figure out on my own what it meant. Ultimately, I had to try out the different options to figure out how to move forward. Anyway, ugh.
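For anyone hitting the same wall: the difference is just the first lines of the preamble, and mixing the two styles is what triggers the obtuse errors (the package names below are only examples):

```latex
% Old LaTeX 2.09 style: styles/options ride on \documentstyle itself.
% \documentstyle[12pt,epsfig]{article}

% Current LaTeX2e style: class first, then packages one by one.
\documentclass[12pt]{article}
\usepackage{graphicx}   % example package; load whatever you actually need
```

Pick one flavour for the whole document and the obtuse errors go away.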

I will finish off with the awe that I feel that LaTeX is a piece of software that is over 30 years old and is still an essential tool for the communication of scientific, mathematical, and engineering (at least theoretical) information. I happen to think that it enhances creativity by allowing one to focus on writing rather than formatting, and that word-processor type software (M$ Werd, Libre-Office, etc.) detracts and distracts from the process (although it’s great for writing quick letters and stuff, it’s a pain for serious work). What is even more awe-inspiring is that LaTeX is built on top of TeX, which was a typesetting program written by Donald Knuth in 1978, almost 40 years ago! Software doesn’t last that long, it just doesn’t... and that this code has withstood the test of time attests to its exceptional nature. And if that wasn’t enough (quote is from here)... “Donald Knuth, rewards the first finder of each typo or computer program bug with a check based on the source and the age of the bug. Since his books go into numerous editions, he does have a chance to correct errors. Typos and other errors in books typically yield $2.56 each once a book is in print (pre-publication bounty-hunter photocopy editions are priced at $0.25 per), and program bugs rise by powers of 2 each year from $1.28 to a maximum of $327.68. Knuth’s name is so valued that very few of his checks – even the largest ones – are actually cashed, but are instead framed.” Cash bounties for bugs? That would bankrupt most multinational companies in a matter of weeks (if not minutes). It is also known that in 40 years, Knuth has spent very little money... TeX is just that good. Well, back to work!
pheloniusfriar: (Default)
This post is going to be horribly specific, but it appears to contain information not available anywhere else on the web I could find. If nothing else, it's a note-to-self, but if someone else is looking for how to do this, it can save a lot of time. As the title says, I needed to install Mentor Graphics' program "HDL Designer" (an ASIC and FPGA development tool) on a laptop running CentOS 7. I had previously installed it on a Scientific Linux 6 system, but could not find my notes :(... thus another reason for this post.

The long and short of it is that the installer and the program itself need a bunch of 32-bit libraries that are not usually installed on 64-bit Linux systems. When running the installer, "HDS_2015.1_ixl.exe" (yes, it's a Linux binary executable), I got a message that it seemed to be missing "java.awt.Toolkit"; however, this was highly misleading... The trick to figuring it all out is that the installation binary creates an installation directory in root's home directory (you need to install it as root if you want it system-wide, which I did): "/root/mgc/install.ixl". In this directory was a program called "install" (gasp). When that program was run directly (again, as root), the various missing libraries could be discerned.

So here are the steps I used (starting from my regular user account):
  • chmod u+x ~/Downloads/HDS_2015.1_ixl.exe
  • sudo su - [to run as root]
  • ~<my_username>/Downloads/HDS_2015.1_ixl.exe
  • cd /root/mgc/install.ixl
  • ./install [repeatedly until it stopped complaining]

In another window, I installed the packages to get past each subsequent missing library:
  • yum -y install glibc-devel.i686
  • yum -y install libXext-devel.i686
  • yum -y install libXrender-devel.i686
  • yum -y install libXtst-devel.i686
  • yum -y install libgcc.i686

Once the complaints had stopped, the GUI for the /root/mgc/install.ixl/install program popped up; however, it needed a lot of esoteric information to proceed. So, the right thing to do here is to actually "Exit" the "install" program! Then, go and run the original binary again: "~/Downloads/HDS_2015.1_ixl.exe". This will bring up its own GUI and will allow you to very easily finish the installation. This same procedure should work for Mentor Graphics' "HDL Author" as well.

But... that's not enough libraries to actually run "HDL Designer"... so, as your regular user (not root), run the actual program to find out what other libraries are missing. On my system, I wanted it in "/opt", so I ran "/opt/HDS_2015.1/bin/hds" to find the missing libraries, and in another terminal as root I ran "yum" as needed. In this case, it only consisted of the following:
  • yum -y install zlib-devel.i686

And, that got it running... kind of... it immediately started to complain about a worrying lack of fonts. The solution to this was to install the 75dpi ISO8859-1 X11 fonts (the 100dpi fonts were already installed):
  • yum -y install xorg-x11-fonts-ISO8859-1-75dpi

And then it came up as clappy as a ham...

Given that I seem to be away from my desk more than at my desk these days, it makes sense for me to have it on my laptop, rather than on my desktop. As for setting up licenses (we get ours through CMC Microsystems since we are a member educational institution), you are on your own... that's a whole other kettle of very bitey fish.
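One last tip: rather than re-running the binary until the complaints stop, ldd can enumerate all the missing shared libraries in one pass (this works on native executables; if the file turns out to be a wrapper script, run ldd on the real binary it launches). Shown here against /bin/ls just to illustrate the output format; substitute the path to the Mentor binary (e.g. /opt/HDS_2015.1/bin/hds or the install program):

```shell
# Any line containing "not found" names a library yum still has to provide.
# /bin/ls is used only to demonstrate; point this at the 32-bit binary
# you are actually trying to run.
binary=/bin/ls
ldd "$binary" | grep "not found" || echo "no missing libraries"
```

That turns the run-fail-install loop into a single lookup.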

Here is a very "computer" music video to go along with this post...

pheloniusfriar: (Default)
Almost two years ago, I posted that I had ordered a model 7Ci tablet from Datawind in Canada (aka UbiSlate, aka Aakash) as part of an experiment to see if such insanely low cost products were any good at all, and where/how they might be used. I posted about ordering an UbiSlate 7Ci $37.99 Canadian tablet (I gave specs for it, and specs for both the $79.99 Canadian UbiSlate 7C+ EDGE device and the $129.99 Canadian UbiSlate 3G7 full 3G phablet) here, back in the summer of 2014. I posted about receiving the 7Ci in this post, and promised I would write a proper review. The fact that I'm just getting around to doing so says a considerable amount about the state of my existence since then. What the older posts don't say is that after I played around with the 7Ci, I purchased a second one and gave one each to Happy and Beep, and ordered myself a 3G7 to try out (the promise of 3G data access for such a low hardware cost was quite attractive). Based on my experiences with the 3G7, a friend also ordered one for themselves, and I ordered another 7Ci as a gift for another friend.

Below, the 7Ci (in its $15 keyboard/case) that it all started with...

If you go to the links for each of the products above, a few things have definitely changed. First and foremost, the cost (in Canadian dollars) has gone up for the 7Ci (now $47.99, still crazy inexpensive), and down for the 7C+ (now $62.99) and the 3G7 (now $99.99). A few other subtleties are in the specs that were not there two years ago: specifically, the 7Ci now says it comes with a mini-HDMI port (the ones I ordered definitely did not have that), and the 3G7 says it has "Wireless Headset Support" (presumably they added a Bluetooth-compatible wireless interface... it costs tens of thousands of dollars and up to use the word "Bluetooth" since it is an industry trademark). They have also (since I last checked a couple of months ago) added two new products, the UbiSlate NS-7 ($149.99 Canadian) and the UbiSlate NS-10 ($175.99 Canadian). The NS-7 supports 3G wireless and the NS-10 just has wi-fi, but both are definitely "beefed up" machines. Specifically, they have 2GB of RAM (vs. 512MB on the older units) and 16GB of Flash (vs. 4GB on the older units). The 7" NS-7 has higher resolution (1280x800) than the 7" 3G7 (which has higher resolution than the 7Ci), and the 10" NS-10 has 2048x1536 resolution. Both have Bluetooth™ and GPS support (which is pretty funkadelic). The NS-7 has an octa-core 1.5GHz CPU and says it supports HD video playback (which the 3G7 also says it does, but the 7Ci does not); the NS-10 has a quad-core 1.6GHz CPU and does not say it supports HD video playback (which is odd to me, but since the 7Ci does a perfect job of displaying HD video, I have no reason to expect anything different from this device). Near as I can tell, the NS-7 is an amped-up 3G7-class machine and the NS-10 is an amped-up 7Ci-class machine. [Not to be confused with NS-13, which is an entirely different thing in the Kingdom of Loathing game, heh]

So... the verdict? Well, as is evident from having sent a lot of business their way, I thought the devices were well worth their cost, and then some. I did mumble a little bit in the post I made after getting the first 7Ci about the case feeling a little rough around the edges (literally... again, not enough to injure or anything, just unrefined) and that was the case (pardon the pun) with all the '7Ci's I ordered. The 3G7 case was a different story and was smooth all the way around, and had a much more sophisticated feel to it. Both the 7Ci and the 3G7 have gorgeous displays and can play HD videos (720p or even 1080p [obviously scaled by the tablet to fit]) flawlessly both from files stored on local or expanded Flash memory (I got 32GB micro SD Flash Cards for all, they worked like a charm), or streaming via wi-fi from my fileserver (yes, I have a fileserver in my house) or the Internet. The touch screens have always worked really well, and the audio quality over a set of headphones is excellent (I've used both ear buds of several sorts, and a set of professional monitor headphones even). The audio out of the little monophonic speaker on the back is not so good... it's functional if needed, and is loud enough to hear kind of okay, and doesn't sound terrible, but it is directed away from the screen and if I use it, I find I need to cup my hand to direct the sound back at my head, or use some sort of flat surface to reflect the sound back at me. Not a good design decision there, but by no means a deal breaker (that the headphone audio sounds good is quite enough for me, I've had a lot of computer systems that had shitty audio no matter what I tried).

If I had to say what I thought was the UbiSlate's top feature, it would have to be the displays, and I have heard similar comments from the others I know who have used them. On the other hand, if I had to pick one thing about them that was a failure, it would have to be the amount of RAM: 512MB is just not enough to run a lot of modern applications (e.g. Terra Battle just dies a horrible memory-starved death after a certain level), including (in too many cases) accessing some web sites with a web browser (e.g. media-rich sites with lots of JavaScript cause Chrome and its ilk to just bail out trying). This RAM size restriction alone seriously limits what sort of things can be done with these devices, but if you can live within those bounds, the things it does well, it does very well. As a side note, I actually went out and found information on the type of CPU they use (not an easy task, fyi) and found out that the 512MB limit is a hard limit on the chip itself, not a design/marketing decision by Datawind (I had hoped to expand the memory myself to 1GB at least, but learned it would be utterly useless since the CPU wouldn't be able to access it... so no "hacker" points on that one).

I have a few more short (negative) notes on the hardware itself before moving on. Besides the issue with the speaker placement, one other industrial design issue came up: on the 3G7 I have, it is not possible to plug in both the mini-USB connector (for the external keyboard, for instance) and the external power supply adapter... the ports are just too close to each other for it to work. This is a huge deal for one of the uses I wanted to put the tablet to: taking notes at school. The battery lasts less than 3 hours on my 3G7 with wireless enabled, which is not enough to make it useful in that context... I had initially planned to plug in while in class and type on the keyboard, but that was not possible. The keyboard itself is usable to type on (I'm used to typing on a little Acer netbook computer, so the key size isn't insurmountable), but I found that if I left it plugged in to the mini-USB port, it would drain the battery of the UbiSlate even when the tablet was off. Another issue I had regarding power was that if the battery was near dead and I did plug in the USB or external power supply to charge it, the unit would still run out of juice and shut down. Whut? Yup. Apparently the power/charging circuit was not designed properly to both fully power the unit and charge the battery. Definitely a problem, but it has not been an issue too many times (once I knew the problem existed)... if this was my only computing device, it would probably be a much bigger deal. Another pure fail was the power adapter that came with my 3G7 (some of the '7Ci's came with external adapters, some didn't... I'm not quite sure why): the plug on the adapter broke after a few months. I stripped the wires down and tried to repair it, and it worked for a while, but died soon after.
I pulled the plug completely apart and saw that it failed because of a weak mechanical connection between the wires and the plug tip that would be extremely difficult to repair myself (I could do it, but what a pain in the ass, and it would probably just break again because there was inadequate strain relief). Just shoddy construction or weak design, at least in the one I had (my friend's adapter is still going strong... I do know that I'm pretty hard on equipment at the best of times though). I just ordered a replacement from China for $10 Canadian, which is one of the things that prompted this post (Datawind Canada doesn't seem to offer it for ordering, a marketing flaw from my perspective). Lastly, and this is probably something more specific to my use of it, I have torn the mini-USB connector off the tablet's motherboard more than once! Again, because of the power issue and the relatively short battery life, and the broken external adapter, I had taken to using it while it was plugged in via the mini-USB port to make it last longer. It is a small surface-mount connector and apparently relatively delicate. It should have been anchored to the tablet's motherboard with strong solder connections through the printed circuit, but I apparently tore it loose from its moorings. A friend repaired it for me (he's a surface-mount assembly master-craftsperson), but it tore loose again. I fixed it myself this last time (just a couple of weeks ago), but am not going to use it while it's plugged in anymore (well, at least until I get my new external adapter, heh).

So... definitely a few issues, but the question then becomes: what is it good for? I have been using my 3G7 on a nearly constant basis (every couple of days at least, sometimes more) since I got it. Beep uses it at about the same frequency as I do. I should mention that both Beep and I have laptops and access to desktop computers in the house, so the UbiSlate tablets definitely have a place in our larger computing infrastructure (and before it sounds like anything too classy, much of said "infrastructure" is beyond lagging-edge technology... some quite long in the tooth, and much of it salvaged and repurposed... but it does the job I keep it around for). Beep says she uses her tablet to watch YouTube videos mostly (she follows quite a number of Let's Players and other YouTubers), but does read online comics and stuff as well... so mostly Internet type stuff when the laptop is too bulky (again, the display and headphone audio are superb, and so is the wi-fi, so it's great for that). I use it to watch videos as well (music videos, and the videos from online courses like Coursera or edX, for instance), but most of the time I spend on it is to read PDFs for classes. One thing that works great is to set it up to my left on my desk and use it to read articles for class while typing notes on the desktop computer in my room (kind of a poor-man's dual-monitor sort of thing). I definitely do some web surfing (it mostly works most of the time), and sometimes watch YouTube videos (with the Android app, it's not so good with web browsing to them). I used it to play online games while I was sick for much of this year (e.g. Kingdom of Loathing... link above... it even runs X-Plane for Android without any lag or anything). I've also used it to read books and such. The friend I gave the 7Ci to used it for a long time to carry technical documentation around with him into areas that didn't have computer access or wi-fi, but he still uses it from time to time.
He is going in for surgery soon, and plans to bring the tablet in with him to watch YouTube videos while he recovers, and maybe read some online books. On the flip side, Happy never really latched onto using a tablet... she either uses her laptop or a desktop computer, or more recently, her smartphone (a data plan is a very recent addition to her life, so that wasn't the reason). Furthermore, the friend who also bought a 3G7 loaded it up with applications and quickly brought it to its knees with a plethora of network-attached apps all running at once (their main previous experience had been with iPhones, which is definitely a different kettle of fish). I helped bring it back under control, but she continues to find it hard to use and, as such, has shied away from it. I am thinking it has something to do with Android and some of the DIY flavour of that class of devices (at least when they're not ultra-integrated from a top-tier systems provider, e.g. LG or Samsung), since she seems to have many of the same complaints with the behaviours of Android phones. She also seems to favour the use of her laptop, and TV type watching using a desktop system in her living room, but most of everything she does from a computing and networking standpoint (e.g. email, apps, etc.) is through her smartphone. In both cases, it's hard to point at specific shortcomings of the UbiSlate devices, and it seems to fall more into a personal style sort of thing.

All the UbiSlate devices have Google Play on them, so you can get any app they have. I get a lot of mileage out of Acrobat Reader, the YouTube app, ConnectBot (an SSH client), RealCalc (a powerful calculator program), and an amazing program called ES File Explorer (which I use all the time). An aside on ES File Explorer, it allows me to connect my tablet to my Linux fileserver using Samba and can also, of course, access my local files on the tablet's Flash storage. It has a built-in music player and will launch the appropriate application to handle any other files (e.g. PDF or MP4, for instance). It also allows for automated connection to cloud servers, but I don't use that particular feature. Anyway, an amazingly integrated, easy-to-use program! On the minus side of things, the 3G7 ran the first 20 or so levels of Terra Battle (a very fun and engaging game from Japan), but it ran out of needed RAM to load levels after that... and I have not been able to continue playing :(. I have been able to play Kingdom of Loathing (a game I've been playing for nearly six years) in a web browser on it with no issues. Another note: the built-in web browser is too ancient to be of any use anymore, and you need to install Firefox or Chrome or something (I ran Pale Moon on it until they announced they were not supporting some of the systems I run anymore, so I stopped using it everywhere). One of the big discoveries/surprises is that it came with Kingsoft Office (aka WPS) loaded onto it. I have to admit to being shocked at how amazing this office suite was on a mobile device. Firstly, it really is tailored for use on mobile devices, you can integrate document storage and/or backup with the cloud storage provider of your choice (e.g. Dropbox, but Google Drive and others are also supported), and it does provide an all-in-one office suite on the go (word processing, spreadsheet, presentation, a PDF viewer/editor [!], a file manager, email integration, etc. ... wow).
There is a desktop version of it available as well and, although I haven't tried it myself, the mobile and desktop versions are supposed to integrate seamlessly through the cloud storage feature (allowing one to move between devices as the need or desire arises). Anyway, something worth checking out in general (I use LibreOffice for my desktop needs... it integrates with the Zotero citation manager, which is critical to me at this phase of my existence). A warning about the UbiSlate's software complement: it comes with their own patented web browser that uses remote servers to actually render the page and then serve it up to the device. Near as I can tell, this is to allow them to insert their own advertising streams into content from other web sites now, but I understand that the original idea was to be able to use powerful servers to render pages for under-powered but insanely cheap tablets that were being (almost) given away in Asia to university students that needed them. It was a good idea, but it doesn't work with modern dynamic and interactive web content... avoid this program! The UbiSlates also came loaded with all manner of adware and bloatware (and cheezy educational software) that would probably be worth your while to delete the hell out of. I have seen some little adverts on the platform even after my rigorous cleaning, but only when I've paused a video or something, which is perfectly acceptable to me (I think I've even maybe clicked on one, it was interesting enough, heh). All the little adverts have been appropriate for all ages so far, which is also a plus (I've read some reviews that excoriated the UbiSlates for adware, but that has not been my experience). Anyway, there's a lot of very popular apps that are way, way, way worse than anything I've seen on my tablet ;).

And then to explore one last feature... amongst the main reasons why I got the 3G7 was to explore the use of 3G data from a tablet platform. It wasn't until late last year that I finally got around to sorting through that. I have a smartphone with data, etc. and went in to inquire about what it would take to get my tablet added to my plan. Well, they had a plan for $5 a month, but that only included 10MB of data... enough to do a bit of email or use an SSH client as needed, but an amount that would quickly run out. It turns out that I had another need that came up since I got the 3G7, and that was to have access to SMS messaging (text messaging) rather than any data or calling ability. When I went to the kiosk in the mall (I'm with Virgin Mobile Canada), they told me that there was no way to get my tablet added with free texting (I would have to pay something like $0.10 a message, yikes!). I called up their customer support line and talked to someone there... they had to do some research and ask around, but they were able to offer me an unlimited text messaging add-on to the 10MB tablet data plan for $10 a month. Suweeet! There was a little bit of awkwardness at the kiosk when I went back to get a SIM (they tried the wrong sized card and it got jammed, but I was able to pull apart the tablet and get it out so the correct one could be put in... no damage done, fyi), and after a bit of back and forth with headquarters, they got the data and text plan up and running for me. I'm going to be moving the SIM to a custom Arduino-based system I'm working on and will be using it for more experimentation, but I did want to report that my 3G7 works like a charm with 3G and a well-known cell phone service provider in Canada.

To close, overall I would call my purchases of the UbiSlates a great success, and despite the several issues I talked about, they are very capable devices for their price. In fact, the lack of RAM was the only issue that proved truly limiting, but it certainly did not render them useless by any stretch of the imagination. If you're looking to get a tablet to do the sorts of things I said it's good at doing, any of the 7Ci/7C+/3G7 devices would be adequate. If you're looking for an inexpensive device to watch videos on or read PDFs or digital books, then it's hard to compete with for the cost! On the other hand, if you're looking for a more capable system, but still at a bargain basement price, you might want to consider the NS-7 or NS-10 depending on whether you need 3G connectivity or not (or whether you want the bigger screen or not). I must say that I'm eyeing the NS-7 as a possible step up from the 3G7 as it addresses the only real concern (the amount of RAM) I had with the earlier devices, but isn't going to cost $600 like an iPad. If you're uncomfortable wrangling Android smartphones into a state that works for you or are leery about deleting and installing apps and configuring them to your needs, then perhaps an iDevice from Apple is more your speed (if you have the $$$$$$) or something in a highly integrated Android device from a major smartphone/tablet vendor (if you have the $$$). For me, a little $ and a bit of effort paid off bigtime, but your mileage may vary ;).

Hmmm... what to use as a reward for reading this far (or at least putting the effort in to scroll down, heh)? How about something that will blow your freakin' mind!? This is a dance performance, but it's like nothing I've seen before (okay, I've seen bits and pieces, but put together like this, uh uh). If you liked The Matrix, you'll particularly enjoy this one. Wow. Just wow!

pheloniusfriar: (Default)
More on this later, but I'm about to launch a Kickstarter (with Jeff Green) to "port" the groundbreaking (and award-winning) CD-ROM title "Midnight Stranger" from its 1995 technological roots (Macromedia Director running on Windows 95... it won't run on anything newer than Windows/XP without an emulator) onto modern systems, specifically anything that can render HTML5 canvases and video. This is the first semi-public (my blog is an Internet backwater that nobody seems to notice, heh) unveiling of the proof-of-concept I wrote to prove it could be done. At the time Midnight Stranger was released, DVDs didn't exist and with the technology at the time you could only fit around 6 minutes of crappy video onto a CD-ROM... hardly enough for an immersive or engaging experience. The solution is a bit cheesy, but the limitations are soon forgotten (from my experience and watching others): instead of full-frame video, a background image is used and little videos overtop that background image are used for people's heads or other movements. It is quite funny looking now, but over an hour of video could then be put on a single CD-ROM and allow for the telling of complex stories with the very limited computer capabilities at the time. While we're way past this now, it is still engaging on the desktop, and the format works for mobile devices because the bandwidth required is so, so much less than doing full-frame video. Where it really shines is in the use of a novel interface called the Mood Bar, where you respond emotionally rather than analytically.

I think this recollection of Jeff's sums it up best: When this production was first premiered at Macworld in San Francisco, despite appearing there in a 4-foot booth with a single 18-inch poster and presented on the smallest desktop Macintosh of the time with discount bin headphones, and situated only forty feet from a million-dollar Sony booth with 30-foot screens and live actors, Midnight Stranger consistently had large crowds and a 30-minute wait-time for just a few minutes of interaction. It became clear that for those for whom this form was well-suited, it gave the opportunity to achieve a significant sense of virtuality; the combination of eye contact, immersive sound, and the onscreen person's apparent response to their 'input' being sufficient engagement to facilitate periods of true suspension of disbelief — the holy grail of media.

As it stands, my demo is only two levels deep and it just repeats (and yes, it's a partial scene pulled from Midnight Stranger). It starts with the background image and you need to make a Mood Bar selection. That launches a video. If you choose wisely, you will be presented with another Mood Bar choice (if you don't choose wisely, you wind up back at the picture and can try again). The second mood bar has 3 choices, a movie runs, and then you wind up back at the picture and can do it all again. The menu and help buttons "work" now (they just report being pressed for now... any functionality can be added later). For me, it worked on Firefox, on Pale Moon (a fork of Firefox), and on Chrome. It did not work on my old LG mobile phone or the default browser for my Ubislate 3G7 (ultra cheap) tablet, but the video did work on the Pale Moon for Android browser on said-same Ubislate (I'm not sure if the audio was working, so that might be a problem). If you get a chance to try it, please let me know how it works out for you (any information on what operating system and/or device and/or browser you were using, and how it worked or didn't, would be really helpful). More work will continue to be done to get it working on more devices, but this was enough for us to be confident we could do it. Fyi, my previous post was me digging into ways of making it run on more systems... the initial proof-of-concept was written months ago when we were first wondering whether it could be done. At the time, it would only run on Chrome, but this latest (as stated above) runs on at least two browser families ;).

Click on the image to open the demo in a new tab:

Here's the Kickstarter info should you be so inclined to take a sneak-peek at what's coming up. I've been told the bio-video is particularly entertaining (somewhat at my expense, but... it's an interesting way to present some of my resume, heh):

Obeing Kickstarter Information
pheloniusfriar: (Default)
"The nice thing about standards is that you have so many to choose from."
— Andrew S. Tanenbaum, Computer Networks, 2nd ed., p. 254

So... I'm learning to write full distributed applications in HTML5, ECMAScript 5.1 (aka JavaScript), and CSS3 (I've done this sort of stuff in system programming languages, but not in this programming environment). One would think that the adoption of global standards would lead to some facility of use or at least the availability of well-structured documentation (yes, I can hear anyone who has ever done any serious programming laughing right now, I know I am laughing at myself for that silly statement). My albatross-du-jour is I'm struggling with HTML5 video. I finally got ffmpeg compiled with all the bits I needed for all the different major formats (MP4, WebM, Ogg)... and again, anyone who has ever even brushed up against this stuff is probably at least smirking at me (grimacing perhaps with the memory). Alas, all three of those "video formats" are actually only generic container types that can hold all manner of encodings of video and audio that can all be mutually exclusive — just because something says it can play MP4 files, doesn't mean it will play my MP4 files. The clearest explanation of this dog's breakfast can be found in Mark Pilgrim's excellent summary Dive Into HTML5: Video On The Web. It is also notable for making me feel like I might not be alone in my despair, and that there may be some justification for the head-scratching I've had to do, with his statement: "HTML5 defines a standard way to embed video in a web page, using a <video> element. Support for the <video> element is still evolving, which is a polite way of saying it doesn't work yet." Yeah. That. Definitely.

For anyone new to this bizarre realm that powers most of our planet's economy and society today, web pages are dynamic entities whose content is specified with HTML5, whose presentation is specified by CSS, and whose interactivity is controlled by ECMAScript. ECMAScript interfaces to the contents of the web pages being displayed through the browser's Document Object Model (DOM). Like all the other technologies discussed, there are many evolutions of this interface as well. For HTML5 video, because there were too many commercial and/or ideological interests (and even technical issues that prevent a "one size fits all" approach), there has been a proliferation of "standard" video formats available to HTML5-compliant browsers. How was this done? It happened by providing nebulous mechanisms that either let the browser pick from a number of different available video formats (it selects the one it likes the most), or let the ECMAScript determine what formats are supported by the browser it's running on, and letting the code pick which one it wants (which works well, until it gets its guess wrong, right?). The W3C then called this horrible idea "the standard" and has moved on to let the world sort it out on its own. Can't blame 'em, but it sucks for anyone who wants to serve up video to more than one kind of web browser. If nothing else, a provider of video content has to encode the video into multiple formats and support them all so the greatest number of people won't notice how broken this idea is. Again, in my case, that means I have to encode every video to be served up into MP4, WebM, and Ogg. Every single one. Sigh. Then, the code needs to pick one of them to tell the browser to display, and here things get murky beyond belief.
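To make the first of those nebulous mechanisms concrete, here's my own little sketch of the "let the browser pick" approach (the file names are hypothetical placeholders, not from any real site): you list the same clip once per container as <source> children of a <video> element, and the browser walks the list in order, playing the first one it believes it can handle.

```javascript
// Sketch of the "let the browser pick" mechanism: one <source> per container,
// in order of preference. File names below are hypothetical placeholders.
function videoMarkup(sources) {
  const srcTags = sources
    .map(({ src, type }) => `  <source src="${src}" type="${type}">`)
    .join('\n');
  // The text between the tags only shows on browsers with no <video> support
  // at all; a browser that knows <video> but can play none of the sources
  // just silently fails.
  return `<video controls>\n${srcTags}\n  Sorry, your browser cannot play any of these formats.\n</video>`;
}

const markup = videoMarkup([
  { src: 'clip.mp4',  type: 'video/mp4' },
  { src: 'clip.webm', type: 'video/webm' },
  { src: 'clip.ogv',  type: 'video/ogg' },
]);
console.log(markup);
```

Note that the fallback text only helps browsers that don't understand <video> at all; a browser that understands the element but can't play any listed source just gives you nothing, which is part of why things get so murky.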

There are any number of web pages and blog entries that provide "guidance" on how to deal with this situation. Almost to a one, they deal with it using only "voodoo programming" techniques: "the practice of coding according to superstition, guesses, or anything other than logic. Voodoo programming is a rather broad term for situations where a programmer uses a piece of code without truly understanding how it works." This makes me stabby. I need to know what I'm doing, because invariably something will break and knowing why I used something earlier will usually point to the solution quickly, rather than letting me spend days or weeks bashing my head on my desk trying to solve the problem (pro tip: it feels so good when you stop). This is also what separates boffo programmers from lame coders: a programmer understands what they are doing and why. The DOM provides a method on the video element called canPlayType() — you give it a MIME type string and it tells you whether the browser thinks it can play the video or not. Sort of. "Common" types include (continuing with the examples already used): "video/mp4", "video/webm", and "video/ogg". If the browser is cool with a format, it returns "maybe" (literally, the string "maybe")... as in "maybe" I might be able to play this video, but I'm not really so sure. If it knows it can't play it, it returns a null string ("")... sigh (a bad programming practice in my mind, but it can be tested for at least). Can the browser play it? Dunno... only the person trying to watch the video will be able to answer that (presuming there are no network issues or other problems that might make them think the browser is the problem rather than the signal coming out of the back of their computer/phone/tablet/toaster).
I should note, that the only way to find out the above was by hours and hours of searching and comparing bread-crumbs on dozens of web sites looking for information that agreed with any other web site (including reading through many long histories of the subject, which while interesting, were definite productivity sucks). The question then becomes, can we be more sure before we make a choice amongst the possible options for video playback? The answer is, somewhat (but only partially) fortunately, yes. You can also specify what codec support is required to play back the videos you've encoded in the string passed to the canPlayType() method of your video element. If you specify the MIME type and a list of codecs, the browser might respond with the string "probably" to your invocation of canPlayType() if it has those codecs identifiably available to it and installed. I would like to take a moment to emphasize that the "gold standard" of compatibility here is "probably", not "yes", not "barring civil uprisings, earthquakes, or volcanoes", not "yo momma", but "probably"... sigh. And to top it off, codec specification is where the voodoo completely takes over. For example, one of the main information sites suggests (for the three main mime types):
myVideo.canPlayType('video/ogg; codecs="theora, vorbis"');
myVideo.canPlayType('video/mp4; codecs="avc1.4D401E, mp4a.40.2"');
myVideo.canPlayType('video/webm; codecs="vp8.0, vorbis"');
Clear as mud right? Ogg makes at least a little bit of sense (if you somehow magically know the video format is theora and the audio format is vorbis), but why "vp8.0", and what the hell is "avc1.4D401E"??? Another site pointed me to the source code for the Clappr "extensible media player for the web", which contained the following vexing bit of code:
const MIMETYPES = {
'mp4': ["avc1.42E01E", "avc1.58A01E", "avc1.4D401E", "avc1.64001E", "mp4v.20.8",
  "mp4v.20.240", "mp4a.40.2"].map((codec) =>
  { return 'video/mp4; codecs="' + codec + ', mp4a.40.2"'}),
'ogg': ['video/ogg; codecs="theora, vorbis"', 'video/ogg; codecs="dirac"',
  'video/ogg; codecs="theora, speex"'],
'3gpp': ['video/3gpp; codecs="mp4v.20.8, samr"'],
'webm': ['video/webm; codecs="vp8, vorbis"'],
'mkv': ['video/x-matroska; codecs="theora, vorbis"'],
'm3u8': ['application/x-mpegurl']
}
MIMETYPES['3gp'] = MIMETYPES['3gpp']
Just sticking to the codecs part for the three main types I'm interested in, there are four "avc1" and two "mp4v" types... and WebM support is asking for the "vp8" codec (rather than the "vp8.0" codec specified by the previous site), Ogg also apparently supports something called "dirac" and there's another audio format called "speex". I know there's also a VP9 format that can be used with WebM as well (again, I've read a lot of web sites lately on this).
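Since "probably" beats "maybe" beats the empty string, all of this probing boils down to a little ranking exercise. Here's my own sketch of that logic (the stubbed canPlayType() and the file names are made up purely for illustration; in a real page you'd call canPlayType() on an actual video element):

```javascript
// Rank canPlayType() answers ("probably" > "maybe" > "") and pick the best
// candidate. `canPlay` stands in for videoElement.canPlayType().
function pickFormat(canPlay, candidates) {
  const rank = { probably: 2, maybe: 1, '': 0 };
  let best = null;
  let bestRank = 0;
  for (const c of candidates) {
    const r = rank[canPlay(c.type)] || 0;
    if (r > bestRank) { best = c; bestRank = r; }
  }
  return best; // null if nothing is playable at all
}

// Stub imitating a browser with H.264+AAC and VP8+Vorbis decoders installed:
// a full MIME-with-codecs string earns a "probably", a bare type a "maybe".
const stubCanPlay = (type) =>
  type.includes('codecs')
    ? (type.startsWith('video/mp4') || type.startsWith('video/webm') ? 'probably' : '')
    : 'maybe';

const pick = pickFormat(stubCanPlay, [
  { src: 'clip.webm', type: 'video/webm; codecs="vp8, vorbis"' },
  { src: 'clip.ogv',  type: 'video/ogg; codecs="theora, vorbis"' },
]);
console.log(pick.src); // the stub "supports" WebM, so this picks clip.webm
```

Of course, all this ranking still bottoms out at "probably", so treat it as a best guess, not a guarantee.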

So, I'm thinking, maybe if I knew the exact codecs I used to encode the videos I'm using, I can just check for that and be done with this rubbish. Back to ffmpeg on Linux and let's give it a try... encode an Ogg video and I get:
Stream mapping:
  Stream #0:0 -> #0:0 (cinepak (native) -> theora (libtheora))
  Stream #0:1 -> #0:1 (pcm_u8 (native) -> vorbis (libvorbis))
Okay... I think I'm safe with
myVideo.canPlayType('video/ogg; codecs="theora, vorbis"');
for that container. How about WebM? Let's see, I get (letting ffmpeg pick the default codecs to use):
Stream mapping:
  Stream #0:0 -> #0:0 (cinepak (native) -> vp9 (libvpx-vp9))
  Stream #0:1 -> #0:1 (pcm_u8 (native) -> vorbis (libvorbis))
Ah, already something "interesting": I have been encoding the video using the VP9 codec, not the VP8 codec. I went to look to see what VLC (a media player for Windoze) thought the codec was, but... it crashed trying to open the file (it plays fine in my browser though). Sigh. The MediaInfo program on Windows claims the video is in "VP9" format, and uses the "V_VP9" codec; and the audio is in "Vorbis" format, and uses the "A_VORBIS" codec. So, I just learned that either I need to explicitly encode with ffmpeg in VP8 format to support older browsers, or check to see if the browser supports the VP9 video format! Going to look at the MP4 videos (sadly, as expected) was no help in sorting this all out... VLC stated that it used the following codecs:
Codec: H264 - MPEG-4 AVC (part 10) (avc1)
Codec: MPEG AAC Audio (mp4a)
and MediaInfo stated that I used the following encodings for video and audio respectively:
Format                      : AVC
Format/Info                 : Advanced Video Codec
Format profile              : High 4:4:4 Predictive@L1.1
Format                      : AAC
Format/Info                 : Advanced Audio Codec
Format profile              : LC
The fact I'm using the "High" profile for AVC rings some bells that not all portable devices (phones, etc. that I need to support) support that "profile" of AVC (even if they can play "MP4/AVC" videos)... I can't remember exactly where I saw that (or when), but I know I filed something away in my noodle about that. Looking at the Wikipedia page for "H.264" seems to back up that impression by stating that there are different support levels possible and that the one I'm using is a "top of the line" profile that might not be supported everywhere. Back to the drawing board on that one I guess too, but I suspect it'll take me a while to figure out.

Makes sense so far? Uh, huh... nope. Not for me either. The "Rosetta Stone" for me was an obscure answer to an obscure question on Stack Overflow. This in turn clarified that the magic value "avc1.4D401E" actually referred to the "H.264 Main Profile Level 3" profile of the AVC video specification (whatever that means... I'm working on it though...). The other hexadecimal gibberish given in the Clappr code was for other AVC profile values. The article also pointed me to an RFC of all insane places, where the encoding of this mysterious string that everybody uses is actually specified: RFC 6381 (see, in particular, section 3.3). And there's the answer to the question I set out to answer: where are these mysterious strings defined and why? It, unsurprisingly, just raises a rash of other questions and concerns of a more practical nature, and is going to require me to become much more fluent at wrangling ffmpeg to do my bidding, but it's at least a start. I'm off to sort out these and many more challenges (including finishing the painting of my kitchen, which I started far too long ago... pictures when it's done!). Oh, and ffmpeg stated, when it did its MP4 encoding for me:
[libx264 @ 0x67bcc0] profile High 4:4:4 Predictive, level 1.1, 4:4:4 8-bit
[libx264 @ 0x67bcc0] 264 - core 138 r2358 9e941d1 - H.264/MPEG-4 AVC codec -
Copyleft 2003-2013 - - options: cabac=1 ref=3 
deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 
me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 
chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 
interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 
b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40
intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 
qpstep=4 ip_ratio=1.40 aq=1:1.00

Stream mapping:
  Stream #0:0 -> #0:0 (cinepak (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (pcm_u8 (native) -> aac (libfaac))
I followed the article's suggestion to try a program called mp4file and it gave me the following on one of the MP4 files I encoded with ffmpeg:
type avcC
  AVCProfileIndication = 244 (0xf4)
  profile_compatibility = 0 (0x00)
  AVCLevelIndication = 11 (0x0b)
type esds
  objectTypeId = 64 (0x40)
    info = <2 bytes>  15 08  |..|
which means my particular "profile" that I encoded to is "avc1.F4000B, mp4a.40.3" (see the article on how to decipher the codes)... which I have not seen on any of the web sites I visited to date. Pale Moon (a Firefox fork), my usual browser, definitely can't play it, and Chrome says it can't play it either (but it says it can "probably" play WebM with VP9/Vorbis, and Ogg with Theora/Vorbis... and btw, it can play the MP4 file, I tested it). Yeah, this is going to take a while.
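
The decoding the article describes is mechanical enough to do in the shell. A quick sketch, using the string mp4file produced above (the six hex digits after "avc1." are profile_idc, constraint flags, and level_idc per RFC 6381 section 3.3):

```shell
# Decode an "avc1.PPCCLL" codec string: three hex bytes after the dot.
codec="avc1.F4000B"                            # what mp4file reported above
hex=${codec#avc1.}                             # -> F4000B
profile=$(( 0x$(echo "$hex" | cut -c1-2) ))    # 244 = High 4:4:4 Predictive
compat=$((  0x$(echo "$hex" | cut -c3-4) ))    # 0   = no constraint flags set
level=$((   0x$(echo "$hex" | cut -c5-6) ))    # 11  = level 1.1
echo "profile=$profile compat=$compat level=$level"
```

which spits out "profile=244 compat=0 level=11", matching the avcC box dump above.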

p.s. Does anyone need to fix their national postal carrier in this digital age? Here's the hint on how to do it:

"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."
— Andrew S. Tanenbaum, Computer Networks, 4th ed., p. 91

p.p.s. My configuration line for ffmpeg is as follows (information that doesn't seem to be readily available anywhere else in a compact form either):
./configure --enable-shared --enable-gpl --enable-version3 --enable-nonfree \
   --enable-libmp3lame --enable-libvorbis --enable-libx264 \
   --enable-libxvid --enable-libfaac --enable-libvpx --enable-libtheora
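
And given the VP9-by-default surprise earlier, forcing VP8 for older browsers is just a matter of naming the codecs explicitly instead of letting ffmpeg choose (a hedged sketch; filenames are placeholders):

```
ffmpeg -i input.avi -c:v libvpx     -c:a libvorbis output-vp8.webm
ffmpeg -i input.avi -c:v libvpx-vp9 -c:a libvorbis output-vp9.webm
```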


Sep. 7th, 2015 05:22 pm
pheloniusfriar: (Default)
I know that everyone who reads my blog (I know of one occasional reader, but meh...) can't wait to hear about my technical work with computers! Right? ... What? Oh... now I'm sad ;). If nothing else, I fantasize about someone running across one of these posts and having it help them with something they're working on. Anyway, I got NTP (Network Time Protocol) working on my server (which is now a Stratum 3 system, which is plenty good for my purposes). All I really needed to do was to add the following entries into my "/etc/ntp.conf" file, and make "/etc/rc.d/rc.ntpd" executable (so it would run if the system was rebooted... I ran it manually to get it going without the need for a reboot):
server     # NRC stratum-2 service primary server
server # NRC stratum-2 service secondary server
Everything else in the config file (from Slackware 14.1) was able to stay "as was". I now get my time synchronization information directly from the NRC (Canadian National Research Council) Stratum 2 servers. My servers are now within a few milliseconds of the world standard time kept at the NRC using multiple atomic clocks.


Next NTP job is to allow the other systems in my house to synchronize to my server (so they will be Stratum 4 devices), but since the latency and jitter in my home LAN are low, they should be pretty tightly coupled to my server's time. I should probably add the "stratum 2" indicators to the configuration file for the NRC servers as well, just to make it explicit (but I don't think it matters much since they're the only servers I list).
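
For the record, letting the LAN machines sync off this server should only need a "restrict" line like the following in /etc/ntp.conf (the subnet is a guess at a typical home LAN, adjust to taste; nomodify/notrap lets clients query time but not reconfigure the daemon):

```
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```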

P.S. If you're in Canada, the web site is:

Most countries presumably have their own time servers... all of them use Coordinated Universal Time (the standard formerly known as Greenwich Mean Time).
pheloniusfriar: (Default)
I have been trying to get a work laptop to the state I need for the data acquisition development I want to do for the ATLAS ITk project. I have managed to get it to dual-boot between the Windows 8.1 it came with and CERN's flavour of CentOS 7 GNU/Linux (go ahead, ask me how that went... hint: <expletives deleted> UEFI!). That is going well, and I got the SCTDAQ software running finally (hint: it doesn't work with the current version of CERN's ROOT software tool suite, use version 5.34 or earlier).

The one main usability issue I still had was the touchpad on the laptop (it's an ASUS X555L series). When I type, I'm fairly interactive with the surface I'm working on, so the accursed touchpad would move the mouse cursor or sometimes even register a click (two touches in quick succession). Even the most basic attempts to use the computer resulted in utter frustration. I finally hooked up my wireless trackball to it and went to disable the touchpad in Gnome... <cue mocking laughter>... yeah, there is apparently no way to disable an input device selectively in Gnome. Not impressed. Some posts talked about a "checkbox" on a particular user interface, but because the touchpad is apparently recognized as a PS/2 mouse, Gnome doesn't think that it should make that offer to the user (it, like Windows apparently, thinks it knows better than you do what you actually want from your computer and "protects" you from having to make choices).

A lot of the forum entries I saw talked about a Synaptics driver for touchpads, but when I did a "synclient -l" the driver did not seem to be activated. So much for that route. There were a few other antique posts about how to disable touchpads in the older way of doing X11 configuration files, but they were of no help to my situation either. It was looking fairly dire until I ran across this post by "deadlycheese", which had the exact answer I needed!

HOWTO: auto-disable touchpad when mouse is plugged in

The first step was to find out what my touchpad was called. With my trackball plugged in, I got:
$ xinput
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
⎜   ↳ PS/2 Logitech Wheel Mouse               	id=11	[slave  pointer  (2)]
⎜   ↳ Logitech Unifying Device. Wireless PID:1028	id=12	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    ↳ Power Button                            	id=6	[slave  keyboard (3)]
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]
    ↳ USB2.0 VGA UVC WebCam                   	id=9	[slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard            	id=10	[slave  keyboard (3)]
I was a bit confused by the fact that I seemed to have two "Logitech" devices, and presumed one was what the operating system saw the touchpad as. So, I unplugged my trackball device and then got:
$ xinput
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]
⎜   ↳ PS/2 Logitech Wheel Mouse               	id=11	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]
    ↳ Power Button                            	id=6	[slave  keyboard (3)]
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]
    ↳ USB2.0 VGA UVC WebCam                   	id=9	[slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard            	id=10	[slave  keyboard (3)]
Well, that seemed to indicate that the "PS/2 Logitech Wheel Mouse" device was probably my touchpad. So I tried the following command to see if it would disable my touchpad:
$ xinput --set-prop "PS/2 Logitech Wheel Mouse" "Device Enabled" 0
... and sure enough, success!!! I don't need to put an icepick through my work's computer anymore. Just to make sure it was reversible, I did:
$ xinput --set-prop "PS/2 Logitech Wheel Mouse" "Device Enabled" 1
and that worked as well!
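
Since I'll want to flip this back and forth (and eventually automate it), here's a tiny wrapper sketch I could drop in ~/bin. The device name is just what this particular laptop reports, and the DRY_RUN switch is my own invention (not an xinput feature) so the command it builds can be checked without actually touching the device:

```shell
#!/bin/sh
# Toggle the touchpad via xinput; the device name is what *this* laptop
# reports (yours may differ -- check the `xinput` listing first).
DEV="PS/2 Logitech Wheel Mouse"

touchpad_set() {
    # $1 = 1 to enable, 0 to disable; DRY_RUN=1 prints the command instead
    cmd="xinput --set-prop \"$DEV\" \"Device Enabled\" $1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        printf '%s\n' "$cmd"
    else
        eval "$cmd"
    fi
}
```

So "touchpad_set 0" kills the touchpad, "touchpad_set 1" brings it back, and "DRY_RUN=1 touchpad_set 0" just shows what would be run.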

This is good enough for me now, but I'm going to fart around with the automatic/hot-plug control of the touchpad as well. But not for a while. As is the case with these things, I am posting this here as both a reminder to myself and as a possible guide for someone else searching for a solution to this particularly gnarly problem. P.S. This has been me the past few weeks working on this laptop:


Apr. 24th, 2015 02:41 am
pheloniusfriar: (Default)
When I upgraded from Slackware 13.37 to Slackware 14.1, it also upgraded the Apache web server from 2.2 to 2.4... which broke my existing configurations. As a result, the images that I referenced in my posts here were not being served up and I haven't had time to deal with it until now. Apparently, there were incompatible configuration changes between the two versions so I had to figure out how to merge my configurations into the new configuration files... specifically, how to update the virtual hosting configurations. There is a good guide here on what needs to be done, but mostly I just had to uncomment the include for the vhosts configuration file and change the "Order allow,deny; Allow from all" directives in the <Directory "..."> blocks to the new style "Require all granted" form. There are probably more little "gifts" waiting for me, but I got that part running at least...
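
Concretely, the change in each virtual host's <Directory> block amounts to this (the path is a placeholder):

```
# Apache 2.2 style (no longer works in 2.4):
<Directory "/srv/httpd/vhosts/example">
    Order allow,deny
    Allow from all
</Directory>

# Apache 2.4 style:
<Directory "/srv/httpd/vhosts/example">
    Require all granted
</Directory>
```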


Mar. 25th, 2015 03:09 am
pheloniusfriar: (Default)
The days pass like seconds and the minutes become months, it has been a challenging time for me. I have so much to write about, but no time at all (I managed to visit SNOLAB 2km underground in Sudbury last Friday and will be presenting my work on the ATLAS detector upgrade at the Large Hadron Collider at the National Conference on Undergraduate Research (NCUR) in Spokane, Washington next month). I got home today and fell asleep from 2PM to 10PM and have been up since, just trying to catch up with myself and spend a few minutes with my own thoughts (and apparently my blog). I still need to finish posts about the International Astronautical Conference and my multiple trips to Germany. I am wondering if, at this point, they will never happen...

Here's a short video of what it's like to go to SNOLAB (hint: holy shit, mind officially blown for having been able to see it with my own eyes...):

The reason for my post is I've realized that I need to upgrade my Linux server from Slackware 13.37 to Slackware 14.1 as I've needed to install software that required more modern libraries. To that end, I just wanted to reproduce a post that I made in June 2011 on Livejournal that doesn't seem to be here. This is necessary to frame things for a coming post on what was required to do the upgrade and document any issues I encountered. Since this blog is search-indexed, hopefully it can help someone who is also trying to do cool things with their computers. Keep in mind this is a reposting of a historical entry from a few years ago. With that said, the server in question has been rock frickin' solid the whole time. I think I needed to reboot it once in that entire time because of some issue (it has been rebooted more than that because of power failures and deciding to move it, but only once because of a problem... at this point, 'uptime' says it has been up for 110 days now... since I moved it to the other side of the living room).

Going from stable hardware to a functional Internet server is not an instant process. For instance, deciding how to install the operating system and getting it to boot and how to partition the drive for data takes a lot of work — especially when "state of the art" is a moving target. When I last installed a system, the idea of trying to boot off a RAID 1 partition (mirrored disks... in case one disk dies, the exact same data is on the second one as well) was not possible. In my first post on the topic, I had been planning to have one non-mirrored partition on each of the two drives (for redundancy) that I would have had to manage manually so I could boot off either disk if the other failed. On my current server, I have a separate (non-mirrored) boot disk (it also had the operating system on it) and then a pair of disks in a RAID 1 configuration for my data. I learned, however, that LILO (the LInux LOader) could now boot a RAID 1 partition! Well, that was going to save me a lot of manual configuration and provide better data safety, so that sounded like a great idea. Right? I mean, right?

Well, I had already partitioned my hard disk as follows (sda and sdb were identically partitioned... and note in case you didn't know or are used to other Unices, Linux blocks are indicated as 1K, not 512 bytes):
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      206847      102400   83  Linux
/dev/sda2          206848     8595455     4194304   82  Linux swap
/dev/sda3         8595456   176367615    83886080   fd  Linux raid autodetect
/dev/sda4       176367616  1953525167   888578776   fd  Linux raid autodetect
Where sda1/sdb1 [100MiB] was going to be where I stored the operating system image to boot off of (manually placing copies on each filesystem and installing the LILO bootloader individually on each disk's Master Boot Record (MBR)) and mounted as /boot once the system was running, sda2/sdb2 [4GiB] would be non-mirrored swap partitions (both used simultaneously to give 8GiB of swap), sda3/sdb3 [80GiB] was going to be the RAID 1 (mirrored) / (root) partition, and sda4/sdb4 [some crazyass number of GiB, like 850 or something] was going to be RAID 1 (mirrored) with a Logical Volume Manager (LVM) volume group (VG) on top of it (more on that later...).

A quick note on the swap partitions: the fact that I did not use a swap file on a RAID partition does mean that if the system is heavily loaded down and swap space is being used and a disk fails, stuff could crash (programs and possibly even the operating system). However, if swap space is needed, the performance hit of putting it on top of a software RAID implementation would be unforgivable. The system could crash, but if it's brought back up, there's enough swap on one disk to run the system fine on the one functioning swap partition. A compromise that I feel is acceptable to take.
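
As a quick sanity check on the fdisk listing above (remember: Linux "Blocks" there are 1KiB units, so divide by 1024 for MiB and 1048576 for GiB):

```shell
# Convert the 1 KiB block counts from the fdisk listing above.
boot_mib=$(( 102400   / 1024 ))      # sda1: /boot
swap_gib=$(( 4194304  / 1048576 ))   # sda2: swap (x2 disks = 8 GiB total)
root_gib=$(( 83886080 / 1048576 ))   # sda3: RAID 1 root
echo "boot=${boot_mib}MiB swap=${swap_gib}GiB root=${root_gib}GiB"
```

which prints "boot=100MiB swap=4GiB root=80GiB", matching the bracketed sizes above.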

I went ahead and created the mirrored partitions /dev/md0 and /dev/md1 with /dev/sda3:/dev/sdb3 and /dev/sda4:/dev/sdb4 respectively [mdadm --create /dev/md{0|1} --level=1 --raid-devices=2 /dev/sda{3|4} /dev/sdb{3|4}] and created EXT4 filesystems on /dev/sda1, /dev/sdb1, and /dev/md0 (the mirrored disks from the previous step) [mkfs.ext4 /dev/{sda1|sdb1|md0}]. I mentioned earlier that LILO can now boot off RAID 1 partitions, but I did not know that at the point that I had done all of this... I installed the Slackware64 13.37 distribution and then started investigating how to do the LILO boot thing properly with my particular configuration. It was then that I learned about the new capability and realized that it would be best if I rolled things back a little and mirrored sda1 and sdb1. I copied the files out of that filesystem into a temporary directory I created, rebooted the system so I could change the partitions from type 83 "Linux" to type fd "Linux raid autodetect" and mirror the partitions. Sadly... the temporary directory I had created was on the RAMdisk used by the installer and when I rebooted, all the files were gone. It was a laughing (at myself) head-desk moment... doh! Well, not such a bad thing (I just needed to re-install the OS, so not a problem at that stage, heh). It also gave me the chance to redo things with the new configuration. I would make /dev/md0 the /dev/sda1:/dev/sdb1 mirrored partition and go from there.

And here's where things took a turn for the argh... I knew I had to re-number the other mirrored partitions so that the /dev/sda4:/dev/sdb4 partition went from /dev/md1 to /dev/md2, and the /dev/sda3:/dev/sdb3 partition went from /dev/md0 to /dev/md1 so I could make the boot one /dev/md0. How to do this? Well, after much research (this is all new functionality, so it's not very well documented anywhere), you stop the mirrored partition (say /dev/mdX for the mirrored partitions /dev/sdaN and /dev/sdbN), re-assign it a new "superblock minor number" (let's say Y), and start it back up again [mdadm --stop /dev/mdX; mdadm --assemble /dev/mdY --super-minor=X --update=super-minor; mdadm --assemble /dev/mdY /dev/sdaN /dev/sdbN] (boy, did it take a long time to figure out how to do that!). Did /dev/md2, then /dev/md1, then created /dev/md0 and everything looked good. Did a "cat /proc/mdstat" and everything was happily mirrored and chugging away. Created an EXT4 filesystem on /dev/md0. I wiped the filesystem on /dev/md1 to make sure I had a clean installation, did a fresh installation, and rebooted the computer just for good measure and... all the RAID device numbering was messed up! I thought it was hard to figure out how to do the stuff I just did... it had nothing on figuring out how to fix this new problem! The clue came when I looked at the information associated with the RAID devices [mdadm --detail /dev/mdX] and saw that there was a line like "Name : slackware:1" where the number after the "slackware:" seemed to match the "mdX" number assigned... and also corresponded to the number I used to create the RAID partition (which the --update=super-minor command didn't seem to change). I was wondering if this was something that was autogenerated at boot time or whether it was actually in the RAID configuration information stored on the disk... 
I used the program "hexdump" to look at the contents of the first few kilobytes of data stored in the RAID device block on the disk [hexdump -C -n 4096 /dev/mdX] and sure enough, the string "slackware:X" was there. I then had to start the search for how to change the "Name" of a RAID array as apparently this was very new and rarely used functionality. The built-in help indicated it could be done, but the syntax didn't make sense. Ultimately, I figured it out and changed the name (and re-changed the minor number in the superblock as well just to be sure) [mdadm --stop /dev/mdX; mdadm --assemble /dev/mdY --update=name --name=slackware:Y /dev/sdaN /dev/sdbN; mdadm --assemble /dev/mdY --update=super-minor /dev/sdaN /dev/sdbN; mdadm --assemble /dev/mdY /dev/sdaN /dev/sdbN] and this technique proved reliable and worked like a charm every time (rebooted the system to make sure everything stuck, and it did, yay!). I understand that this is Slackware functionality to guarantee what mdX number gets assigned to a RAID array (where other operating systems can, and do, randomly make assignments), so it's ultimately a Good Thing™, but it's not well documented.

So, it was time to finish up the installation by installing the bootloader. The configuration (in /etc/lilo.conf on the /etc directory for the operating system installed on the disk, e.g. /mnt/etc/lilo.conf if that's where the disk partition with the OS is mounted) was pretty much this (it was having problems with my video card, so I left out the fancy graphical console modes):
lba32 # Allow booting past 1024th cylinder with a recent BIOS
boot = /dev/sda
# Append any additional kernel parameters:
append=" vt.default_utf8=0"
timeout = 50  # In 1/10ths of a second
vga = normal
# Linux bootable partition config begins
image = /boot/vmlinuz
root = /dev/md1
label = Linux
read-only # Partitions should be mounted read-only for checking
Fairly simple stuff: the "boot" line specified the "whole disk" so the bootloader would be installed in the Master Boot Record (MBR) of the drive, it would load the Linux image, and use /dev/md1 as the root filesystem. Simple, except it didn't work!!! LILO, when run [mount /dev/md1 /mnt; mount /dev/md0 /mnt/boot; chroot /mnt lilo -v -v -v], would generate the message "Inconsistent Raid Version information on /dev/md0". Sigh... now what? Well, it turns out that sometime over the past year, the "metadata format" version of the "mdadm" package had changed from 0.9 to 1.2... and LILO did not know how to read the 1.2 version metadata and so assumed the superblock of the RAID array was corrupted (there's a bug report here). It could, according to what I read, understand the 0.9 metadata format, so... copied the files off the /dev/md0 partition (this time onto the actual hard drive, heh) and re-initialized the partition to use the old metadata format (again, it took a huge amount of time to track down the poorly documented command) [umount /mnt/boot; mdadm --stop /dev/md0; mdadm --create /dev/md0 --name=slackware:0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1; mkfs.ext4 /dev/md0; mount /dev/md0 /mnt/boot]. Once that was done, the boot files could be copied back and lilo run again. When I first tried it, I only installed on /dev/sda and when I tried to boot, it just hung (never even made it to LILO). This confused me, so I checked the boot order of the disks in the BIOS settings. The "1st disk" was set to boot first and then the "3rd disk" if it couldn't. It took me a while, but I eventually tried (out of desperation) to switch the boot order of the disks and ... voila... the LILO boot prompt! Turns out that the disk that Linux thinks is "a", the BIOS thinks is the "3rd" disk, and "b" was the "1st" disk. Live and learn, eh? 
The trick was it still needed to be installed on both hard disks (each has a separate MBR), so "lilo" had to be run and then the "boot" parameter had to be changed to /dev/sdb in lilo.conf and lilo had to be run again [just "chroot /mnt lilo -v -v -v" once the filesystems were already mounted]. Once I installed on both /dev/sda and /dev/sdb, it didn't matter which one I set first, so that was then working the way it should.

Great... right? Sigh... the kernel would load and then panic because it could not figure out how to use the root filesystem (it would give the message: "VFS: Unable to mount root fs on unknown-block(9,1)"). I remembered from my digging that the RAID disk devices had the major device number "9", and the root minor device (from above) was "1", so it knew it was trying to load that device, but couldn't. To me, that said that the RAID drivers were not in the kernel and that I would need to build a RAMdisk with the proper kernel modules and libraries for it to properly mount the RAID device as root. I'd had enough and went to bed at that point and took it up the next day. Again, what a pain to find documentation (one of the reasons why I'm writing this all out for posterity's sake... maybe I should write a magazine article, heh)! The trick was to use the "mkinitrd" script that comes with Slackware, and to do that you need to have the installed OS available because the command doesn't seem to be installed on the DVD's filesystem. Once the operating system is mounted [mount /dev/md1 /mnt; mount /dev/md0 /mnt/boot], create a copy of the /proc/partitions file on the disk version of the OS [cat /proc/partitions > /mnt/proc/partitions] (it will be the only file in that proc directory). Edit the /mnt/etc/lilo.conf file to include the line "initrd = /boot/initrd.gz" right below the "image = /boot/vmlinuz" line (and make sure the boot line is "boot = /dev/sda"). Then run the mkinitrd command to create the RAMdisk image and lilo to install it [chroot /mnt mkinitrd -R -m ext4 -f ext4 -r /dev/md1; chroot /mnt lilo -v -v -v]. Change the /mnt/etc/lilo.conf file to "boot = /dev/sdb" and run lilo again [chroot /mnt lilo -v -v -v] to install LILO's configuration on both disks. At this point, you need to delete the "partitions" file on the mounted OS image (it should be an empty directory for the virtual /proc filesystem when it runs) [rm /mnt/proc/partitions].

And that, my friends, is how I spent my summer vacation ;). The system booted (I tried switching boot order via BIOS and it worked fine), mounted its root filesystem, and loaded my shiny new Slackware64 13.37 installation in all its glory. Finally!!! But my journey is far from over... I now have to configure the system and integrate it with the framework I already have running so it could eventually take over from my current server (my plan was to move the pair of 200G disks from the current server to the new one and use them as part of a system backup strategy). I had to install the LVM partition for my data and decide how to carve up the space into Logical Volumes (LVs). I have to decide whether I want to stick with NIS or move to LDAP for authentication (I've been meaning to for a while, but know it's going to be a colossal nightmare), I have to configure Samba (for file and print sharing with Windoze machines), I have to move my web sites to the new box (including migrating the MySQL databases for the Wordpress installations), and then migrate the data from my old server to the new data partitions. Sigh... it's a huge job with so many different technologies (each of which requires a great deal of expertise to use).

Actually, the next thing I need to get working after the upgrade is to sync my server's clock with the NRC NTP servers since the hardware clock on its motherboard swerves like a drunken landlubber on a crooked dock. But that will likely have to wait for the summer.
pheloniusfriar: (Default)
I have spent weeks (months? ... but certainly not too hard) trying to find the answer to how to use X11 with the XFCE window manager on my Slackware server at the recommended resolution of my monitor of 1440x900. This has been startlingly hard to get information on (along with information on how to configure the onboard video hardware on my motherboard), so I have decided to document it here in hopes that anyone else having this problem can find a quicker solution to their woes. I have so much other stuff to post (seriously, I met Buzz Aldrin and Bill Nye and Neil deGrasse Tyson earlier this month, how cool is that?), but I just haven't had any time at all these past few months... I have been slammed so hard with work (both work work and school work and my own work... yes, I know that's 150%). Until then... anyone who doesn't care about xorg.conf files can safely skip the rest of this post ;).

Now, before I go any further, full honours to Arun Viswanathan for figuring it out first: ... thank you Arun!!!

Here are the steps: find the manual for your monitor. I have an eMachines E19T6W monitor, and finding the PDF of the User's Manual wasn't hard at all. Next, there were two sections giving technical information... the Specifications section and a section on Video Modes. The Specifications section stated that the monitor had a 1440x900 native pixel configuration (which you always want to use if you can) and a 0.2835mm x 0.2835mm pixel pitch (which gives a display size of about 408mm x 255mm, which is needed for the xorg.conf file eventually). In the video mode section, it specifies a whole bunch of resolutions, but the 1440x900 mode is given as "Mode 15 - VESA 1440x900 - Horizontal Frequency 55.935kHz - Vertical Frequency 59.887Hz - Available in DVI Mode (19-inch Model)? Yes.". The first set of sorcery is the "gtf" and "xrandr" commands. The former automagically generates a Modeline for the resolution setting you want, and the latter allows you to add it and test it out interactively. The second set of sorcery involves permanently setting up an "xorg.conf" file and XFCE configuration to implement it permanently going forward. To interactively test out the hardware, first get the Modeline needed by running "gtf <horizontal resolution in pixels> <vertical resolution in pixels> <vertical refresh rate in Hz>":
  gtf 1440 900 59.887
which resulted in the output:
  # 1440x900 @ 59.89 Hz (GTF) hsync: 55.81 kHz; pclk: 106.27 MHz
  Modeline "1440x900_59.89"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
This then needs to be used to create that video mode using the "xrandr" program. Note that the word "Modeline" is left off, and also note that I got the tag "VGA-0" as being the port I was using just by running the "xrandr" program with no parameters to get the current state of the display subsystem (it showed VGA-0 connected, and DVI-0 disconnected, which is how my system is configured: I am putting my VGA connection through a KVM so I can share my desktop [in the physical sense] between my server and a desktop PC... I need to get a DVI-capable KVM someday, but it's not a high priority by any means).
  xrandr --newmode "1440x900_59.89"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
  xrandr --addmode VGA-0 1440x900_59.89
  xrandr --output VGA-0 --mode 1440x900_59.89
And this switched the video mode to 1440x900 (you can check just by running "xrandr" with no parameters)! Now one important thing to be said is it actually looked pretty shitty... but the good news is that this was just an intermediate step and as I type this, the display looks marvy! The solution to the quality of the display and fonts and stuff was a two step process. The first step was to create an "/etc/X11/xorg.conf" file on the system for the particular configuration I was using and to restart X11/XFCE again. After the configuration file was created, XFCE came up in 1024x768 mode (which I presume is a fallback setting because just about everything supports that video mode). Typing "xrandr" showed that "1440x900" was a mode that was now supported, but it had not been selected by XFCE. The second part of the solution was to go to the "mouse menu" (in the lower left corner) and run Settings->Display, select the Resolution "1440x900" from the pulldown, and Apply the change. Once that was done, all the weird font and display quality issues I had doing it the manual way above by forcing the issue with the "xrandr" program went away and I had a beautiful desktop to work from! Yay! Just to be sure, I exited X11/XFCE again and restarted it and the settings stayed, so it's a permanent fix.
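
As a cross-check on gtf's arithmetic: the pixel clock in a modeline is just the total (blanked) raster size times the refresh rate, i.e. h_total × v_total × v_refresh, where 1904 and 932 are the last horizontal and vertical timing values in the Modeline above:

```shell
# Pixel clock = h_total * v_total * v_refresh, using the gtf numbers above.
pclk_mhz=$(awk 'BEGIN { printf "%.2f", 1904 * 932 * 59.887 / 1000000 }')
echo "pixel clock = $pclk_mhz MHz"
```

which comes out to 106.27 MHz, matching the "pclk: 106.27 MHz" that gtf printed.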

Taking a step backward, one of the things that I wasn't sure was set up right was the video hardware. It's the onboard video for the old Asus M4A78LT-M LE motherboard I have in my server (if it ain't broke, don't fix it y'all), and I wasn't sure that the right drivers were being used (I was flailing pretty hard trying to get this to work and followed all sorts of weird paths on my way). I should further mention that I am using the onboard video because my server is really a server and I don't need much in the way of video display hardware capabilities (99.9% of the time, I'm connecting to it over the network from another system running X11 or just through SSH or something, so I hardly ever use the console unless I'm doing serious maintenance on it). The video chip is an ATI 760G class chip (Radeon 3000 family), and I read innumerable old posts about how it was not properly supported by the X servers of the day and more recent posts about how ATI has dropped all support for them from their proprietary drivers for Linux systems. It was not looking good at first, but it turns out there is an open source alternative for this class of video hardware that goes by the name "xf86-video-ati" (and shows up in the kernel output as the "radeon" driver). I initially thought this driver was not being invoked even (as I said it was really hard to find information and much of it was conflicting, confusing, or just plain false), but when I finally knew what to look for, I realized the correct driver was running and that it was simply a configuration issue I was dealing with. The breakthrough here happened when I found a Wiki page on it: Once I had that, it was smooth sailing with the configuration options for the driver and my card (which I have reproduced below).

The last thing I wanted to mention is what is required to create a working "xorg.conf" file. Again, one would think this would be easily accessible information, but one would be wrong... Not to beat around the bush, the first thing that is needed is a "Device" section. This could be quite simple and only contain key/value pairs for "Identifier" and "Driver". I went a bit further with actual configuration parameters, but it's the "Identifier" that is critical to building a working "xorg.conf" file. I used the Wiki page above to get the information needed, and used the model of my motherboard as the identifier value. The next thing that is required is a "Monitor" section. Again, this could have as little as the "Identifier" and "Modeline" keys. In my case, the identifier was given the value "E19T6W", but these are just text strings and could just as easily have been "Fred" or "Wilma"; just pick something that makes sense for the monitor you have and the way your brain works (and this is the same for all the identifier values). I went further and used information from the User's Manual for my monitor to put in the minimum and maximum values for the horizontal and vertical frequencies, and also put in the physical dimensions of the screen so that things would display at the correct size (12 point fonts should be 12 point fonts in physical dimensions on the screen, etc.). FYI, I got the values I used by multiplying the dot pitch by the horizontal and vertical resolutions, but verified those numbers with a ruler, and they were correct. It was in the "Monitor" section that the "Modeline" generated by "gtf" went. Finally, there needs to be a "Screen" section that pulls it all together. I gave this section the uninspired "Identifier" of "Default Screen", but this is where pointers to the "Device" and "Monitor" sections to use for the screen are included, referenced by their identifier names. The rest of that section is pretty much boilerplate (including the "Display" subsection), but it is probably good to have multiple resolutions available in case you want to swap displays at some point (say, if the display you are using fries), as there is usually some keyboard combination that lets you switch on the fly between supported video modes.
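As a concrete example of the "DisplaySize" arithmetic: assuming a dot pitch of 0.2835 mm (a value consistent with the numbers I measured; check your own monitor's manual), the calculation works out like so:

```shell
# DisplaySize wants the visible area in millimetres: dot pitch times
# resolution in each direction (0.2835 mm is an assumed dot pitch)
awk 'BEGIN { dp = 0.2835; printf "DisplaySize %.0f %.0f\n", dp*1440, dp*900 }'
# prints: DisplaySize 408 255
```

which matches the values in the "Monitor" section of the configuration file below.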

The final "/etc/X11/xorg.conf" file that worked for me is as follows (note that I used the Modeline label "1440x900" rather than "1440x900_59.89" as provided by the "gtf" program as I didn't need to support multiple versions of the 1440x900 resolution):
Section "Device"
  Identifier "M4A78LT-M LE"
  Driver "radeon"
    # software cursor might be necessary on some rare occasions,
    # hence set off by default
  Option "SWcursor" "off"
    # supported on all R/RV/RS4xx and older hardware, is on by default
  Option "EnablePageFlip" "on"
    # valid options are XAA, EXA and Glamor. Default value varies per-GPU
  Option "AccelMethod" "EXA"
    # enabled by default on all radeon hardware
  Option "RenderAccel" "on"
    # enabled by default on RV300 and later radeon cards
  Option "ColorTiling" "on"
    # default is off, otherwise on. Only works if EXA activated
  Option "EXAVSync" "off"
    # when on increases 2D performance, but may also cause artifacts
    # on some old cards. Only works if EXA activated
  Option "EXAPixmaps" "on"
    # default is off, read the radeon manpage for more information
  Option "AccelDFS" "on"
EndSection

Section "Monitor"
    Identifier      "E19T6W"
    HorizSync       30.0-75.1
    VertRefresh     50.0-75.0
    DisplaySize	    408 255
    Modeline	    "1440x900"  106.27  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync
EndSection

Section "Screen"
    Identifier "Default Screen"
    Device     "M4A78LT-M LE"
    Monitor    "E19T6W"
    DefaultDepth	24
    SubSection "Display"
       Viewport   0 0
       Depth     24
       Modes    "1440x900" "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
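Once the file is in place and X11 has been restarted, a quick way to confirm the new configuration took (again, the log location is an assumption that may vary per distribution; the asterisk in "xrandr" output marks the current mode):

```shell
# confirm X actually read the config file
grep "Using config file" /var/log/Xorg.0.log

# confirm the new mode is listed and active
xrandr | grep "1440x900"
```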
If you have been struggling with something similar to this, I hope this helped you...


