Thursday, January 31, 2013

IPV6 in enterprise best practices/white papers

From time to time I look at what's going on on the NANOG mailing list. It is a very interesting list on which something worth reading pops up quite often. You don't have to subscribe in order to see posts; there are publicly available archives, which might be a better option for those who look at the list only sporadically. This time my eye caught a thread with the subject line as in the title of this post. So, since IPv6 is a hot topic these days, or at least it seems so, I decided to read through the thread and make a summary, along with pointers to the materials that were linked to.

The thread was started on January 26, 2013 by Pavel Dimow, who asked for real world examples of IPv6 deployment in the enterprise. More specifically, he said that he thinks the procedure to introduce IPv6 is:
  1. Create address plan.
  2. Implement security on routers/switches and then hosts.
  3. Create AAAA and PTR records in DNS.
  4. Configure DHCPv6.
  5. Test IPv6 in LAN.
  6. Configure BGP with ISP.
He also wondered how to maintain PTR records in case SLAAC or DHCPv6 is used, and whether he should use DDNS for that purpose. Finally, he asked whether to use SLAAC or DHCPv6.

The general consensus among repliers was that IPv6 connectivity to the Internet should be established first. The reason is that operating systems prefer IPv6 over IPv4, so if a destination has an AAAA record and the client has a locally assigned IPv6 address, an IPv6 connection will be attempted first. If you configure Internet connectivity as the last step, there is no path to the destination, timeouts have to expire before the missing IPv6 connectivity is detected, and in the end users experience delays. This scenario actually happened in one network I used. Namely, an intranet Web server was given an IPv6 address to test that IPv6 worked. Since all operating systems today have IPv6 enabled by default, clients on the local network tried to connect to the Web server using IPv6, which wasn't possible since only a small part of the intranet had IPv6 connectivity. Still, it turns out that it is possible to configure address preferences in an OS (though I don't know in which ones yet). And there is a draft that defines how address preferences can be distributed via DHCPv6.
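On Linux with glibc, one such knob is the /etc/gai.conf file, which controls getaddrinfo()'s address selection. A sketch of how IPv4 could be preferred over IPv6 there (the precedence values are the RFC 3484 defaults that ship commented out in that file, with the IPv4-mapped prefix raised):

```
# /etc/gai.conf -- address selection policy for getaddrinfo().
# Listing any precedence line replaces ALL built-in defaults,
# so the whole default table has to be restated:
precedence  ::1/128        50
precedence  ::/0           40
precedence  2002::/16      30
precedence  ::/96          20
# Raised above ::/0 so that IPv4 (mapped) destinations win:
precedence  ::ffff:0:0/96  100
```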

After obtaining addresses from the ISP and making an address plan, the next step would be to configure network equipment, though preferably not everything at once but only a part for testing. It is very important to get at least some experience with IPv6 before deploying it in a production environment. To get that experience there are tunnel broker services that are free and very good; one of them apparently also allows free IPv6 BGP connectivity via tunnels.

Here is a more specific series of steps to introduce IPv6, written by a person doing an actual deployment. Note that the deployer had their own ASN:
  • get a /48 PI from the local LIR
  • configure the border routers to announce the prefix and do connectivity tests (ping Google/Facebook addresses using an IPv6 address from our own /48 - loopback on the router)
  • configure IPv6 addresses on internal router and do connectivity tests again
  • configure firewall interfaces with IPv6 addresses and again connectivity tests
  • configure IPv6 firewall rules (mostly a mirror of the IPv4 rulesets)
  • configure IPv6 address on DMZ servers (actually the first one configured were the DNS servers)
  • do connectivity tests again
  • publish IPv6 records for the DNS servers and for the domain and run ping/telnet port 80 tests from another IPv6-enabled network to check that everything is OK
  • publish AAAA records for all the hosts in the DMZ, making sure all the services available on IPv4 are also available on IPv6
  • do the same for the servers in the "Server network"
  • as the last step, enable IPv6 on the network that serves the users, using RA with the stateful configuration bit set on the firewall and DHCPv6 to serve up DNS servers for IPv6
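As an illustration of that last, user-facing step, on a Cisco-style router the RA/DHCPv6 part might look roughly like this (the interface, pool name and addresses are made up for illustration; the deployer actually did this on a firewall, not necessarily Cisco gear):

```
ipv6 dhcp pool LAN-POOL
 dns-server 2001:db8::53
 domain-name example.net
!
interface GigabitEthernet0/1
 ipv6 address 2001:db8:1::1/64
 ipv6 nd managed-config-flag
 ipv6 dhcp server LAN-POOL
```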
Security is a very important aspect of any network, and so it is in IPv6, too. Some of the IPv4 security mechanisms translate to IPv6, e.g. DHCP snooping, but there are some IPv6-specific things to be aware of, like rogue RAs.

Scalability is another very important aspect of any network. There was a subthread about MLD snooping, or rather the lack of it. Namely, there are high-density VM deployments in which even high-end switches don't have enough processing/storage power, and in that case multicast degrades to broadcast. In one post someone asked for figures from real-world switches, e.g. the maximum number of multicast groups, but unfortunately there was no answer.

Finally, a very good source of documentation about IPv6 deployment is the Internet Society's Deploy360 pages. There are documents that describe how to develop an address plan, as well as Aaron Hughes's presentation from NANOG.

Tuesday, January 29, 2013

How to change Volume Group's name...

In a default installation of CentOS, LVM is used and all volume groups are named VolGroup00. This can create problems when disks from multiple machines have to be accessed from a single machine. So, one of the options is to rename the volume groups. This is actually very easy to do, in the following four steps that can be performed on a live machine:
  1. Rename the volume group:
    # vgrename VolGroup00 <newname>
  2. Change /etc/fstab: open it in a text editor and do a search and replace through the file, i.e. change any occurrence of VolGroup00 to <newname>.
  3. Change /boot/grub/grub.conf in the same way: again replace any occurrence of VolGroup00 with <newname>.
  4. Recreate the initrd image. First, rename the old initrd image; initrd images are in the /boot directory and their names contain the version of the currently running kernel (use uname -r, but without the architecture part). Then run:
    # mkinitrd <initrdname> <kernel version>
    Be careful that you don't have a newer kernel installed which will be started during the next boot process. In that case you'll have problems! Maybe it's best to restart the machine before doing this whole procedure.
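Since both the fstab and grub.conf changes are just a blanket search and replace, it can pay to rehearse the edit on a scratch copy before touching the real files. A small sketch (the file contents and the new volume group name vg_main are made-up examples):

```shell
# Dry-run the VolGroup00 -> new-name replacement on a scratch copy
# before editing the real /etc/fstab and grub.conf.
cat > /tmp/fstab.copy <<'EOF'
/dev/VolGroup00/LogVol00  /     ext3  defaults  1 1
/dev/VolGroup00/LogVol01  swap  swap  defaults  0 0
EOF
sed -i 's/VolGroup00/vg_main/g' /tmp/fstab.copy
grep vg_main /tmp/fstab.copy   # both lines should now use the new name
```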
Restart the machine and that should be it. :) Of course, just to be safe, try this first on some test machine.

Tuesday, January 22, 2013

Using ~ as a shortcut for home directory...

I just stumbled on the question of why tilde (~) is used as a shortcut for the home directory on Unix. It's very interesting, since it never occurred to me to ask this question. :)

There are a lot more questions like that one on StackExchange; here is a selection of some interesting ones (to me at least):
And to finish, here is why vi uses hjkl for cursor movement.

Thursday, January 17, 2013

USB cable and strange behavior with disk in enclosure...

I think one of the disks in a USB disk enclosure I have just got broken because of a faulty USB cable. Now, I don't know how that is possible, nor what exactly happened, but I have a strong feeling that I'm right. Namely, what happened is that when I plugged the cable into the enclosure I heard strange sounds, as if the heads were trying to move but were being retracted back to the initial position; a series of clicks, about a second apart. That happened almost every time I used that cable. At first I thought that the problem was that the USB ports are USB 3.0 while the enclosure is USB 2.0 and something was wrong with the currents, or who knows what. But googling didn't turn up anything about that. Then I tried another cable and the disk worked normally. WTF?!

Well, I found out that when the power source isn't strong enough, the symptom is exactly this clicking heard in the disk. In that case you should unplug the disk as soon as possible. Also, you probably received an additional cable with the caddy that allows you to work around this problem. What probably happened in my case is that the cable was somehow faulty and decreased the current so that the disk didn't have enough power.

Seagate disk SMART values...

I was just looking at the smartctl output from one of my disks, and it had a large value for the Seek error rate attribute (note that I edited the output for readability):

Vendor Specific SMART Attributes with Thresholds:
  1 Raw_Read_Error_Rate     0x000f 100 253 006 Pre-fail Always      -       0
  3 Spin_Up_Time            0x0003 098 098 000 Pre-fail Always      -       0
  4 Start_Stop_Count        0x0032 100 100 020 Old_age  Always      -       826
  5 Reallocated_Sector_Ct   0x0033 100 100 036 Pre-fail Always      -       0
  7 Seek_Error_Rate         0x000f 072 060 030 Pre-fail Always      -       17262017054
  9 Power_On_Hours          0x0032 087 087 000 Old_age  Always      -       11538
 10 Spin_Retry_Count        0x0013 100 100 034 Pre-fail Always      -       0
 12 Power_Cycle_Count       0x0032 100 100 020 Old_age  Always      -       838
187 Reported_Uncorrect      0x0032 100 100 000 Old_age  Always      -       0
189 High_Fly_Writes         0x003a 100 100 000 Old_age  Always      -       0
190 Airflow_Temperature_Cel 0x0022 067 043 045 Old_age  Always  In_the_past 33 (0 17 33 31 0)
191 G-Sense_Error_Rate      0x0032 100 100 000 Old_age  Always      -       0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age  Always      -       809
193 Load_Cycle_Count        0x0022 001 001 000 Old_age  Always      -       351945
194 Temperature_Celsius     0x001a 033 057 000 Old_age  Always      -       33 (0 11 0 0 0)
195 Hardware_ECC_Recovered  0x0012 097 044 000 Old_age  Always      -       196225726
197 Current_Pending_Sector  0x0010 100 100 000 Old_age  Offline     -       0
198 Offline_Uncorrectable   0x003e 100 100 000 Old_age  Always      -       0
199 UDMA_CRC_Error_Count    0x0000 200 200 000 Old_age  Offline     -       1
200 Multi_Zone_Error_Rate   0x0032 100 253 000 Old_age  Always      -       0
202 Data_Address_Mark_Errs  0x0000 100 253 000 Old_age  Offline     -       0
254 Free_Fall_Sensor        0x0000 100 253 000 Old_age  Offline     -       0
It's not the first time I've seen such large raw values which, while not problematic (VALUE/WORST/THRESH are what should actually be monitored), are nevertheless interesting, to say the least. While searching around I stumbled on a post, Seagate's Seek Error Rate, Raw Read Error Rate, and Hardware ECC Recovered SMART attributes. In it, the author explains that these values are actually 48 bits wide, and due to the way they are encoded it follows that they look large. More specifically, the raw value of the Seek error rate attribute should be converted to hexadecimal; then the upper 16 bits are the number of errors, while the lower 32 bits are the total number of seeks.

In this concrete case the raw value of Seek error rate is 17262017054, or 0x000404E57A1E. The upper 16 bits are 0x0004 and the lower 32 bits are 0x04E57A1E. What this means is that there were 4 seek errors (meaning the head wasn't positioned correctly after being moved to some track) out of 82147870 seeks in total. So, the errors are a very, very small fraction.
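The split can be checked with plain shell arithmetic (the raw value is the one from the smartctl output above):

```shell
raw=17262017054
errors=$(( raw >> 32 ))         # upper 16 bits: actual seek errors
seeks=$(( raw & 0xFFFFFFFF ))   # lower 32 bits: total number of seeks
echo "$errors errors in $seeks seeks"   # prints: 4 errors in 82147870 seeks
```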

For the meaning of Seek error rate attribute, and many others, I recommend Wikipedia's page about SMART.

Friday, January 4, 2013

Partition vs. whole disk and creating encrypted filesystem...

With a new laptop I got a 1 TB disk which I intend to use as a data disk. So, it will have a single encrypted partition. This is a new disk with a 4K sector size, and because of that the fdisk tool offers to start the partition at sector 2048. This is alignment stuff inherited from the old days of MSDOS, and obviously I don't want to waste disk space for those reasons. You can read more about it on the Linux ATA Wiki. Linux is the only OS I'll use with this disk. It is possible to start the partition from sector 63, but with fdisk you'll have to first create a partition and then switch to the expert menu (option x), where it is possible to move the beginning of the partition from sector 2048 to sector 63 (option b).

Now, it is also possible to use the whole disk for a filesystem, without a partition table. I found some discussions of the pros and cons of this approach. An additional question is whether Fedora will recognize such a disk during the boot process. The LVM HOWTO also talks about this issue. It seems that everything boils down to whether some other tool or operating system that expects the disk to be partitioned will treat it as unpartitioned and thus destroy the data on it. Also, someone noted possible performance degradation, but this was not confirmed by simple testing (look at the first link I gave), and besides, why would that happen when you use the whole disk? It can not be better aligned, can it? Also, someone used the whole disk for his Gentoo OS and then had to install GRUB. Since GRUB, during installation, asks you whether you want it installed on, e.g., /dev/sda or /dev/sda1, it seems that it isn't important whether you have a partition table. But I didn't go deeper into this.

In the end, I decided to use the whole disk, no partitions. This disk will hold a single filesystem, will have only data on it, and will never be used on anything other than Linux; actually, on anything other than my laptop. So, this is the way I decided to go.

So, from that point on everything was very simple:
  1. Encrypt the whole disk:
    # cryptsetup luksFormat /dev/sdc

    This will overwrite data on /dev/sdc irrevocably.

    Are you sure? (Type uppercase yes): YES
    Enter LUKS passphrase:
    Verify passphrase:
  2. Open the encrypted disk:
    # cryptsetup luksOpen /dev/sdc cryptodev1
    Enter passphrase for /dev/sdc:
  3. Create the file system:
    # mkfs -t ext4 /dev/mapper/cryptodev1
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    61054976 inodes, 244190134 blocks
    12209506 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    7453 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848

    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
  4. Remove reserved blocks (5% by default):
    # tune2fs -m 0 /dev/mapper/cryptodev1
    tune2fs 1.42.5 (29-Jul-2012)
    Setting reserved blocks percentage to 0% (0 blocks)
  5. Finally, mount the disk:
    # mount /dev/mapper/cryptodev1 /mnt
And that's basically it. When you want to use the disk later and it isn't mounted, you first have to open the encrypted device (the luksOpen step) and then mount the file system.

Thursday, January 3, 2013

Signing XML document using xmlsec1 command line tool

Suppose that you have some XML document you wish to sign. It turns out it's very easy to do, because there is the xmlsec library, and in particular the xmlsec1 command line tool that's a standard part of the Fedora Linux distribution. The only problem is that it's very picky and not very informative when it comes to error logging; also, there are a lot of small details that can catch you. Since I had to sign a document, I spent some time trying to figure out how to do it. In the end I managed, and I'll write down the procedure here for future reference. Before I continue: you'll need a certificate and a key to be used for signing and verification. They are not the topic of this post, so I'll just give them to you: private key, certificate, and CA certificate.

Ok, let's assume that you have the following XML document you wish to sign:
<?xml version="1.0" encoding="UTF-8"?>
<firstelement attr1="attr1">
  Content of first element.
  <secondelement attr2="attr2">
    Content of the second element.
    <thirdelement attr3="attr3">
      And the content of the third element.
    </thirdelement>
  </secondelement>
</firstelement>
Basically, you can take any XML document you wish. I'll suppose that this XML document is stored in the file tosign.xml. If you typed the XML document yourself, or if you just want to be sure, you can check that the XML is well formed. The xmllint tool serves that purpose. Just run it like this:
$ xmllint tosign.xml
If you don't get any error messages or warnings, the XML document is well formed. You can also check whether the document is valid by providing a schema, or DTD, via the appropriate command line switches.

In order to sign this document you have to add XML Signature fragment to the XML file. That fragment defines how the document will be signed, what will be signed, and, where the signature, along with certificate, will be placed. The fragment has the following form:
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
    <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
    <Reference URI="">
      <Transforms>
        <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
        <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue />
    </Reference>
  </SignedInfo>
  <SignatureValue />
  <KeyInfo>
    <X509Data />
  </KeyInfo>
</Signature>
Note that this (quite verbose) fragment has to be placed somewhere within the root element. Now, let's sign this newly created document. To do so, invoke the xmlsec1 command like this (this is one line, in case it is broken into two due to formatting):
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --output signed.xml tosign.xml
After this, the signed XML document will be in the file named signed.xml. Take a look into it; the placeholders within the signature fragment are filled with signature data, and with the certificate whose private key was used to sign the XML document.

Note that the signature itself is generated using the private key (privkey.pem) which, as its name suggests, has to be kept private by the signer. Otherwise, anyone could forge the signature.

Now, to verify the signed XML document you have to specify the trusted CA that will be used to verify the signature. It has to be the certificate of the certificate authority (CA) that issued the signer's certificate. In my case that's cacert.pem, i.e.:
$ xmlsec1 --verify --trusted-pem cacert.pem signed.xml
SignedInfo References (ok/all): 1/1
Manifests References (ok/all): 0/0
As you can see, the signature was verified OK. You can now try to change something in the XML document and see whether the verification still passes.

I'll mention one more thing before concluding this post. Namely, in the previous example the whole XML document was signed, but you can also sign only a part. To do so you have to do two things: first, mark the element that you wish to sign (its content will also be signed), and second, tell xmlsec1 to sign only that element.

The first step is accomplished by adding an attribute to the element that should be signed. Let's assume that in our case we only want secondelement to be signed. Modify the appropriate opening tag to have the following form:
<secondelement attr2="attr2" id="signonlythis">
Note that I added the attribute id, but basically any name can be used (unless you use some predefined schema or DTD).

The second step is to tell xmlsec1 that only this element should be signed. This is accomplished by modifying Reference element to have the following form:
<Reference URI="#signonlythis">
If you now try to sign this modified XML document using the command I gave above, you'll receive an error message:
$ xmlsec1 --sign --privkey-pem cert.key,cert.pem --output test_signed.xml tosign.xml
func=xmlSecXPathDataExecute:file=xpath.c:line=273:obj=unknown:subj=xmlXPtrEval:error=5:libxml2 library function failed:expr=xpointer(id('signonlythiselement'))
func=xmlSecXPathDataListExecute:file=xpath.c:line=356:obj=unknown:subj=xmlSecXPathDataExecute:error=1:xmlsec library function failed:
func=xmlSecTransformXPathExecute:file=xpath.c:line=466:obj=xpointer:subj=xmlSecXPathDataExecute:error=1:xmlsec library function failed:
func=xmlSecTransformDefaultPushXml:file=transforms.c:line=2405:obj=xpointer:subj=xmlSecTransformExecute:error=1:xmlsec library function failed:
func=xmlSecTransformCtxXmlExecute:file=transforms.c:line=1236:obj=unknown:subj=xmlSecTransformPushXml:error=1:xmlsec library function failed:transform=xpointer
func=xmlSecTransformCtxExecute:file=transforms.c:line=1296:obj=unknown:subj=xmlSecTransformCtxXmlExecute:error=1:xmlsec library function failed:
func=xmlSecDSigReferenceCtxProcessNode:file=xmldsig.c:line=1571:obj=unknown:subj=xmlSecTransformCtxExecute:error=1:xmlsec library function failed:
func=xmlSecDSigCtxProcessSignedInfoNode:file=xmldsig.c:line=804:obj=unknown:subj=xmlSecDSigReferenceCtxProcessNode:error=1:xmlsec library function failed:node=Reference
func=xmlSecDSigCtxProcessSignatureNode:file=xmldsig.c:line=547:obj=unknown:subj=xmlSecDSigCtxProcessSignedInfoNode:error=1:xmlsec library function failed:
func=xmlSecDSigCtxSign:file=xmldsig.c:line=303:obj=unknown:subj=xmlSecDSigCtxSigantureProcessNode:error=1:xmlsec library function failed:
Error: signature failed
Error: failed to sign file "tosign.xml"
The problem is that the URI attribute references the ID attribute of an element. But an ID attribute isn't recognized by name; it has to be declared in a DTD or in a schema, whichever you have. In our case there is neither a schema nor a DTD, and thus the ID isn't recognized by xmlsec1. So, we have to tell it the name of the ID attribute, which can be done in two ways. The first one is the command line switch --id-attr, with which the command to sign this document becomes:
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --id-attr:id secondelement --output signed.xml tosign.xml
The name after the colon is the attribute name that is the ID. The default value is "id", but it can be anything else; if it is "id", it can be omitted. The argument to --id-attr is the element whose attribute should be treated as an ID. You should also be careful with namespaces: if they are used, then the namespace of the element has to be specified too, and not the shorthand but the full namespace name. Finally, note that XML is case sensitive!

The other possibility is to create a DTD file and give it as an argument to xmlsec1. In this case the DTD should look like this (I'll assume it is the content of a file tosign.dtd):
<!ATTLIST secondelement id ID #IMPLIED>
And you would invoke xmlsec1 like this:
xmlsec1 --sign --privkey-pem privkey.pem,cert.pem --dtd-file tosign.dtd --output signed.xml tosign.xml
Note that you'll receive a lot of warnings (the DTD is incomplete), but the file will be signed. To check the signature, you again have to specify either the --dtd-file or the --id-attr option, e.g.:
xmlsec1 --verify --trusted-pem cacert.pem --id-attr:id secondelement signed.xml
Now, you can experiment to check that really only secondelement was signed and nothing else.

One final note: you have to put the XML signature fragment into the XML file you are signing yourself. What can confuse you (and confused me) is that there is an option, sign-tmpl, that adds this fragment, but it is very specific and used only for testing purposes.

Tuesday, January 1, 2013

Deprecated functions in ffmpeg library

Well, I have some code that uses an old FFmpeg library, and now, as I updated my laptop to Fedora 18, it turns out that those functions are gone for good. I found some resources about how to port old code (here, here and here), but since it wasn't exactly what I needed, I decided to write my own version. So, here we go.


url_fopen()

This function has been changed to avio_open(). There is also url_close, which was renamed to avio_close(). This information I found here.


av_new_stream()

This function is still present as of FFmpeg 1.0.1, but it is marked as deprecated and is replaced by avformat_new_stream(). Suppose that the old code was:
AVStream *st = av_new_stream(oc, i);
then the modified code should be:
AVStream *st = avformat_new_stream(oc, NULL);
st->id = i;
Be careful to check first that st isn't NULL!


dump_format()

This function was renamed to av_dump_format().


av_write_header()

Replaced with avformat_write_header(), which accepts two arguments instead of one. Pass NULL as the second argument to get behavior identical to the old function.


avcodec_open()

This one is replaced with avcodec_open2(). The replacement function accepts three arguments instead of two; pass NULL as the third argument to get the same behavior as the old function.


avcodec_encode_audio()

Replaced with avcodec_encode_audio2().


av_set_parameters()

I couldn't find the replacement for this one. At first I found claims that this function has no replacement, but that was while it was still available in FFmpeg, even though deprecated. Then they removed it, so it must have a replacement. In some places I found that they only disabled it, in others that its parameters have to be passed to avformat_write_header(). In the end I gave up, because I don't need a working version of that part of the code for now. Since in my case avformat_alloc_context() is called and then av_set_parameters(), the last thing I looked at was calling avformat_alloc_output_context2() instead of avformat_alloc_context(). But the change is not trivial, so I skipped it.


SampleFormat

This enum has been renamed to AVSampleFormat.


URL_WRONLY

This constant has been replaced with AVIO_FLAG_WRITE.


SAMPLE_FMT_*

These are now prefixed with AV_, so use AV_SAMPLE_FMT_U8, AV_SAMPLE_FMT_S16, etc.

Fedora 18 installation

The first day of 2013: I switched to a new laptop, a Lenovo W530, and to Fedora 18. In this post I'll document what works and what doesn't, so note that this will be a live post. Basically, this post originates from somewhere around Fedora 16 time and I never got it into a state I thought good enough to be published. But then I realized that it would never be finished, so I decided to publish it anyway. Note that I reworked this post to be exclusively about Fedora 18 on the Lenovo W530. At the time this installation was performed, Fedora 18 was still in the beta stage, so things might differ after the final release. I decided to publish this post in an unfinished state and to use it to document the progress of my transition to the new laptop.

As usual, there are other resources on the Internet about Linux on the W530; here are some of the more interesting ones I managed to find:
Also, there are some pages with information (somewhat) relevant to this combination (Fedora 18 and W530):


I bought the W530 with a 1 TB internal disk and 8 GB of RAM, which was a pretty good deal for this laptop. Additionally, on eBay I bought a 512 GB SSD and 32 GB of RAM (4x8GB). The display has a resolution of 1920x1080 and there is an NVidia card. Resolution was one of the things I wasn't particularly happy with on the previous laptop, which has 1600x900.

Installation and first boot

I decided to use PXE boot and then install the machine over the network. It turned out that I had some problems with DHCP on my work network. Additionally, I had problems with Fedora's new installer, which produced many errors during disk partitioning. Everything boiled down to the BTRFS option being completely broken. LVM was much better, but it also had some quirks, like embedding the host name in logical volume names (this was removed later). It should be noted that all this was being actively worked on, so if I tried the same thing some other day it might work. But I didn't want to wait for some other day, and in the end I decided to mirror the Fedora 18 development repository on my previous laptop and install from there. The setup was basically the following: the old and new laptops connected with a crossover Ethernet cable, with the old laptop connected to the Internet over wireless. On the old laptop I mirrored the whole Fedora 18 directory tree and also configured DHCP/TFTP and Apache for the installation. I won't go into details of how to do that, because there is quite a good manual on the Fedora pages.

First boot

First boot is as usual, except for one tiny annoyance. Namely, I have been using specific values for my UID and GID for a long time, and the first boot configuration screen doesn't allow me to proceed without defining a new user, while at the same time it doesn't let me choose that user's UID and GID.


Customization

Customization consists of tweaks to the system and adding external repositories so that I can install mplayer and similar software not distributed with Fedora, at least not in a usable form.

System customizations

One thing I change is the following line in the /etc/nsswitch.conf file:
hosts:     files mdns4_minimal [NOTFOUND=return] dns
to be
hosts:     files dns
The reason is that on some local networks I'm using the .local domain suffix, and by default such names are resolved using mDNS (the mdns4_minimal option). Since I'm using regular DNS for those names, they are unresolvable unless I make this change.

RPM Fusion

RPM Fusion has packages that are not shipped with Fedora; for example, various audio and video codecs that are excluded due to patent or other issues. In that case you need RPM Fusion. RPM Fusion supports different versions of Fedora; you can find the list here. You have to select one free and one non-free repository, copy the link and paste it into the terminal as an argument to the 'rpm -i' or 'yum localinstall' command. This adds the necessary yum configuration files. Now you can install, for example, mplayer and vlc:
yum install mplayer vlc
There are other interesting packages, but I'll let you explore those for yourself.

Adobe Flash and Acroread

YouTube works without Flash thanks to the HTML5 support in Firefox. But not all videos on YouTube work with HTML5, and there are also sites on the Internet that can not live without Flash, so it has to be installed too.

On Adobe's download page there is an option to retrieve YUM configuration files. So choose YUM, and you'll be offered a file, about 4K in size, to download. After you download it somewhere to your disk, install it using the 'yum localinstall' command. Now you can install Flash with the following command:
yum install flash-plugin
As for Acrobat Reader, you have to download the rpm file and install it "manually". But I don't think it is really necessary, because Evince works very well. There are sporadic cases where Evince has problems, mainly due to fonts, but otherwise it's a very good replacement.

Google Chrome

To install Google Chrome you could first install the rpm package manually, after which it adds Google's repositories itself. But you can skip that "manual" installation, i.e. add the yum repository yourself and install Chrome using yum. To do so, create the file /etc/yum.repos.d/google-chrome.repo and copy the following content into it:
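The original file contents are missing from the post, but a repository definition along these lines should work (the URLs are taken from Google's Linux repository documentation and may have changed since):

```
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
```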
Additionally, you have to install Google's signing key (or set gpgcheck to 0 in the yum configuration file, which is not advisable). Anyway, import Google's rpm signing key with:
rpm --import
Now, just run yum install google-chrome-beta.x86_64, google-chrome-stable.x86_64 or google-chrome-unstable.x86_64, depending on which version you want to run. Note that there are some other packages in Google's Chrome repository; use the 'yum list google\*' command to get a list of them.


Virtualization

Fedora has a lot of virtualization options to choose from.

VirtualBox is part of the RPM Fusion free repository. So, you don't need to add anything extra to be able to install it; just run:
yum install VirtualBox
and that's it. Alternatively, you might want to install the "official" Oracle version. Oracle has yum repositories for Fedora (though not for Fedora 18 at the time this post was written), which you can find here, along with instructions on how to install them.

In case you are using VMware Workstation, you'll have to download it from VMware's Web pages. I downloaded the 64-bit trial version of VMware 9.0.1 and installed it. It works, even though during the installation process it created a file named ~ (tilde). It was exactly 1K in size, but I don't know what it is; it could be some problem in the installation script. Apart from that, VMware seems to work without any problems.

Note that on January 6th the kernel was updated to version 3.7.1 (to be precise, 3.7.1-2). VMware, as of 9.0.1, isn't compatible with that kernel version and doesn't work! But the solution is simple and easy to find on the net; execute the following commands (as root) and everything should work again:
cd /usr/src/kernels/3.7.1-2.fc18.x86_64/include/linux
ln -sf /usr/src/kernels/3.7.1-2.fc18.x86_64/include/generated/uapi/linux/version.h .
Those are two separate commands; make sure each is entered on a single line. Also, if your kernel version is different, just change the version substrings accordingly.

In case you have version 3.7.1-5, version.h was removed; when you start VMWare Workstation it says it needs to rebuild drivers, and after you confirm that, it complains that there are no kernel headers. To fix this problem, execute the following two lines:
cd /usr/src/kernels/3.7.1-5.fc18.x86_64/include/linux
ln -sf /usr/src/kernels/3.7.1-5.fc18.x86_64/include/generated/uapi/linux/version.h .
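The same fix can be written once so that it follows whatever kernel is currently running instead of hard-coding the version. This is my own sketch (fix_version_h is a hypothetical helper name, not part of any package), assuming the matching kernel-devel package is installed:

```shell
# Recreate include/linux/version.h in a kernel source tree so the
# VMware module build can find it. With no argument it targets the
# running kernel's tree under /usr/src/kernels. Run as root.
fix_version_h() {
    ksrc=${1:-/usr/src/kernels/$(uname -r)}
    mkdir -p "$ksrc/include/linux" &&
    ln -sf "$ksrc/include/generated/uapi/linux/version.h" \
           "$ksrc/include/linux/version.h"
}
```

Call it as fix_version_h after every kernel update, or pass an explicit tree, e.g. fix_version_h /usr/src/kernels/3.7.1-5.fc18.x86_64.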

Removing unnecessary software

This is something I used to do very thoroughly, but as time passes I do it less and less. Disk space these days is very cheap and there is plenty of it, and the interdependencies between packages are complex, so these days I make only a few adjustments.

Removing Asian and Arabic fonts

I decided to remove those simply because it annoys me to have such a large number of options in font selection dialogs, many of which I simply don't understand! So, using yum I removed the packages that start with paktype and lohit (i.e. yum remove paktype\* lohit\*), as well as wqy-zenhei-fonts, thai-scalable-waree-fonts, cjkuni-uming-fonts, jomolhari-fonts, vlgothic-fonts, vlgothic-fonts-common, un-core-dotum-fonts, smc-meera-fonts, sil-padauk-fonts, sil-abyssinica-fonts, paratype-pt-sans-fonts, lklug-fonts and khmeros-base-fonts.

UI Tweaks

I installed the gnome-tweak-tool, gconf-editor and dconf-editor packages to be able to tweak the UI. A lot of things can be done from Gnome's Tweak Tool, but many can not. For example, modal windows are by default attached to the window that opened them, like on Mac OS X. But I prefer them detached, so that I can move them and access content behind them. To change this behavior you should set /desktop/gnome/shell/windows/attach_modal_dialogs to false, e.g. like this (note that this should be a single line):
gconftool-2 --toggle /desktop/gnome/shell/windows/attach_modal_dialogs
This will toggle the value, if it was true it will become false and vice versa. To query current state use the following form:
gconftool-2 --get /desktop/gnome/shell/windows/attach_modal_dialogs
If you want to disable Fedora package search in Gnome, there is a boolean key that controls that. Also, when you install Fedora, fedmsg is enabled by default; you can disable it by toggling its key org.fedoraproject.fedmsg.notify.enabled. For these last two keys you should use dconf, not gconf. Note that I had some problems using the command line client (probably my fault), so I suggest you use dconf-editor to inspect and change those values.

Successes and Problems

Failed login problem

For some unknown reason I'm unable to log in to GNOME if SELinux is enabled. So, when I boot the machine I first have to switch to a virtual console, log in there as root and issue 'setenforce 0'. I could make that permanent across boots (by modifying the /etc/sysconfig/selinux file), but I want SELinux to be enabled, so I'm waiting for this issue to be fixed.

Audio problem

I had problems trying to play audio. I can't remember if the problem was there from the beginning or only appeared after some update. Anyway, it turns out the problem is with permissions: I, as an ordinary user, don't have permission to access the sound devices, so PulseAudio falls back to a dummy device. I searched a bit, but couldn't find the cause. The temporary fix is to switch to the root user and change ownership of the /dev/snd directory to my username (chown -R username /dev/snd). PulseAudio immediately notices this and activates sound.

Video problems

Because of permissions I also had problems with gnome-shell and video. Namely, gnome-shell was taking 400% CPU (well, 4 CPUs actually) because it was doing software rendering. Running gnome-shell from the command line, I got the following error:
libGL error: failed to load driver: i965
libGL error: Try again with LIBGL_DEBUG=verbose for more details.
Two things confused me here. First, is X11 using NVidia or Intel? And second, why was it failing? So, I reran gnome-shell with LIBGL_DEBUG set to verbose (and exported) and the output was a bit more informative:
libGL: OpenDriver: trying /usr/lib64/dri/
libGL error: failed to open drm device: Permission denied
libGL error: failed to load driver: i965
libGL: OpenDriver: trying /usr/lib64/dri/
libGL: Can't open configuration file /home/sgros/.drirc: No such file or directory.
libGL: Can't open configuration file /home/sgros/.drirc: No such file or directory.
When I saw the permission errors I immediately knew this was the same bug as with audio. So, I again did chown -R username /dev/dri and restarted gnome-shell. After that, gnome-shell wasn't even noticeable in the process list.

As for the question of Intel vs. NVidia, glxinfo shows that Intel is used. When I rebooted and looked into the BIOS settings, it turned out the NVIDIA Optimus display setting was selected. That setting activates both cards, but Intel is used by default and NVidia only on request. To be able to use such a configuration you'll need to install the Bumblebee program.


As of this writing, to have a working GNOME and to be able to log in, after boot finishes and GDM presents you with the login screen you should switch to the second virtual console (Ctrl+Alt+F2), log in as root and execute the following commands:
setenforce 0
chown username /dev/dri /dev/snd
This isn't necessary any more, at least not on a fully patched Fedora 18 as of March 4th.

Other notes

After working on the W510 for two years, I have to say there are some things I'll need time to get used to. First, the keyboard is a bit different: the Esc key is much smaller, and PgUp and PgDn are placed with the arrow keys instead of in the top left part of the keyboard. Actually, I'll have to get used to the placement of all the other navigation keys as well.

Also interesting is that there is no Caps Lock keyboard indicator, and there are no Num Lock and Scroll Lock keys at all. That is problematic when you turn on Caps Lock without knowing it: suddenly things don't work and you don't know why, until you realize the problem is Caps Lock. I think there should be an LED indicator on the keyboard, but since there isn't, I found a GNOME Shell extension that adds an indicator to the panel. Since I don't have a Num Lock key, I turned off its indication.

Happy New Year!

Well, the first day of the New Year! I wish everyone reading this blog a happy New Year and a lot of success, health and all the other things that are so important, and not so important, in life. Actually, I wish that to everyone, not just readers of this blog. ;) Ok, I'll stop here; I could become pathetic and that's not good. :)
