


My friend over at BrainFloss gave a very good tutorial on how to expand your ix2-200 StorCenter without losing your data; the tutorial is linked here. I wanted to share some additional information I discovered along the way, because my ix2-200 was not set up exactly as the instructions describe. Having some Linux background, I was able to work around this and still expand my RAID drives using BrainFloss's instructions plus a few other Linux commands. So here it goes.

The objective is as follows:

  • Log into SSH as root with the root password (the password may be soho + your admin password).
  • Grow the RAID pseudo-device.
  • Extend the LVM physical volume by resizing it.
  • Extend the logical volume attached to the volume group.
  • Finally, expand the file system on the logical volume so StorCenter can see the available space.

After logging in over SSH, I found that the steps I needed to take were a bit different. Here are the steps I ran.

  1. Expand the RAID pseudo-device:

    root@ix2-200d:/# mdadm --grow /dev/md1 --size=max

  2. Expand the LVM physical volume:

    root@ix2-200d:/# pvresize /dev/md1

  3. Determine the free space in the LVM volume group:

    root@ix2-200d:/# vgdisplay    # this displayed multiple volume groups

          --- Volume group ---
          VG Name               54d89fg7_vg
          .
          .
          .
          Free  PE / Size       476930 / 931.50 GB
          --- Volume group ---
          VG Name               md0_vg
          .
          .
          .
          Free  PE / Size       0 / 0
  4. The next step requires you to reference not only the volume group above, but also the logical volume attached to it, so we need to run the following to get that info:

    root@ix2-200d:/# lvscan
      ACTIVE '/dev/54d89fg7_vg/lv5d49388a'
      ACTIVE '/dev/md0_vg/BFDlv'
      ACTIVE '/dev/md0_vg/vol1'
  5. Now we can run BrainFloss's next command to extend the logical volume, plugging in the values our system returned:

    root@ix2-200d:/# lvextend -l +476930 /dev/54d89fg7_vg/lv5d49388a
  6. Since we did not unmount the volume, we can go right to the next step of expanding StorCenter's XFS file system:

    root@ix2-200d:/# xfs_growfs /dev/54d89fg7_vg/lv5d49388a
  7. Now just send the command to reboot the system so that the web management interface can see the expanded volume:

    root@ix2-200d:/# telinit 6
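
Before running that last reboot, you can sanity-check that the file system and logical volume actually grew. This quick check is my own addition rather than part of BrainFloss's steps:

    root@ix2-200d:/# df -h
    root@ix2-200d:/# lvdisplay /dev/54d89fg7_vg/lv5d49388a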

Your device will reboot and (if you have email notifications correctly configured) it will notify you that “data protection is being reconstructed”, a side effect of expanding the outermost RAID container.
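
If you would rather watch that reconstruction from the shell than wait for the email, the kernel reports RAID resync progress in /proc/mdstat:

    root@ix2-200d:/# cat /proc/mdstat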


So approximately one week ago I got a question as to why a client's upload was getting cut off when they uploaded a file to our FTP server. Being that it was only one client out of the hundred or so using our FTP server, I figured there was something wrong on their end or with the routing. I tried to log in myself, uploaded files, moved around, did a little dance, and voila, everything worked fine without issue. Well, not so fast: customers continued to have issues and I couldn't make sense of it. A couple of days after that I got another request, this time to look into why a very frustrated customer's uploaded PDF was being corrupted. OK, so now something has got to be going on. I checked the FTP server: uploads worked fine without corruption. I checked the server that actually hosts the clients' files: no errors reported. Finally my hairline receded another inch and I kept looking.

To give a 10,000-foot view of the layout: we have one Linux vsftpd server, which, might I add, rocks. Attached via a CIFS mount is a Windows file server that does exactly what its name says: it serves up files. This not being my first rodeo, I had been fairly confident about this setup. The vsftpd server essentially provides the data link to the file server, and all was well. After looking, and looking some more, I ran a command some of you might know as tail. Running "tail -f /var/log/syslog", I started to see errors pop up.

The log was full of "CIFS VFS: Write2 ret -13", and I thought, oh, of course, this is as clear as mud!

All joking aside, I looked for several hours for some kind of solution. Nothing showed up online, but one thing I did determine was that my Linux release, even though it was fully patched, was two versions behind. With the CIFS errors coming from the kernel, I decided that at the very least the kernel would need to be recompiled to fix the issue with VFS; or I could just update the entire OS to the latest release and be done with it.

On average, our FTP server gets a connection every 15 minutes. Not extremely busy, but it also does not give me a window large enough to do a full update. So I went with an alternate plan that would keep downtime to about 10 minutes. Drawbacks? I would spend the next week building two Linux servers with vsftpd and selecting the best Linux OS to support it. I thought, "This can't be that hard." Sorely mistaken, I dove into the task and was suddenly met with problems from the new and improved vsftpd.

One of the newer features, which I believe arrived in version 2.3.5 and up, is that when users are chrooted, vsftpd no longer allows the chroot root folder to be writable. I knew this was a problem because of how our FTP is set up: we need to allow users to write to their root. Knowing that the user's root is technically not on the FTP server, I could see no harm in allowing this.

Nonetheless the issue remained; however, with the new 3.0 version you can set a config option called allow_writeable_chroot, which solves the issue for you. The only catch is that you need to compile the 3.0 version yourself until all the Linux distributions pick it up. After all was said and done, moving to CentOS with a self-compiled vsftpd fixed our problem.
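
For reference, here is roughly what the relevant part of vsftpd.conf looks like once you are on 3.0; the chroot_local_user line reflects a setup like ours and may differ in yours:

    # /etc/vsftpd.conf (relevant lines only)
    # keep local users jailed to their home directories
    chroot_local_user=YES
    # vsftpd 3.0+: allow a writable chroot root instead of failing with
    # "500 OOPS: vsftpd: refusing to run with writable root inside chroot()"
    allow_writeable_chroot=YES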


Now, as a sysadmin I am always installing VMs, testing OSes, hardening, and such. One situation I always find myself in after a fresh OS install is needing to get Perl up to date and install the modules my scripts require. I always end up in CPAN installing modules, finding out they are failing to build, and after banging my head into a wall for a while, I came across this fix.


$ sudo apt-get install build-essential autoconf automake libtool gdb

This installs all the packages necessary to build CPAN modules. Once you have them installed, open CPAN and go ahead and install all the modules you need. Oh, and this works on Debian and Ubuntu releases of Linux; if you need these on Red Hat, you can use yum to find the equivalent packages for running make in CPAN. Simple and quick as that.
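
As a rough equivalent on Red Hat or CentOS (group and package names can vary by release, so treat this as a starting point):

    $ sudo yum groupinstall "Development Tools"
    $ sudo yum install autoconf automake libtool gdb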


For all of you who enjoy working in Linux, you may have run across a framework called Mason. Mason is a Perl web framework much like Catalyst; however, I think it is much easier to use and less convoluted. It allows you to create components that pull together a set of data for a webpage. Now, I know Perl has its drawbacks like any other language out there. What I do like about it, though, is the vast library of modules that will do just about anything you want.

The Mason framework lets you build a more object-oriented site, and create handlers to do some of the heavy lifting while leaving Apache to do the HTML rendering. One thing I have used in conjunction with Mason is a module called Prototype, which lets you add Ajax to your web pages for a richer user experience. But I digress.

One thing I wanted to cover here is how to add the handler to your Apache configuration. This is fairly straightforward, as with any CGI handler. One thing to make sure of is to have a directory (usually called data) where the cached objects can be placed. The other thing to take care of is assigning the component root, so that Mason knows where to start looking for components when they are called.

Now, one trick I found: I usually create a subfolder in the web root for all my components, but when I use this directory as my component root, my autohandler does not run and assist with processing the pages correctly. So what is an autohandler? It is a page much like Application.cfm or .cfc is for ColdFusion, instantiating your application and setting global, session, and other variables. The autohandler lets you set up your database connection object, along with any processing that needs to happen each time a page is called within the application.
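
As a minimal sketch of what that can look like, here is a file named autohandler sitting in the component root. The DSN and credentials are placeholders, and $dbh assumes you have allowed it as a Mason global (see the Apache config at the end):

    <%init>
    # runs before every page under the component root;
    # connect once and reuse the handle across requests
    use DBI;
    $dbh ||= DBI->connect('dbi:mysql:database=myapp', 'user', 'secret');
    </%init>
    % $m->call_next;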

Since the autohandler needs to run, one thing I have done is set my component root equal to the root of my application. Now, I believe you can also register the handler in Apache as the primary CGI handler, much as you would add Perl as a handler for .pl files. However, for the sake of testing and flexibility, I found that simply using the web root as the component root works just as well. I still use a subfolder to contain all my components; I just have to make sure the path I use to call a component includes the folder it is in. Not a big deal to me.

Other than that, all you need is to set your PerlHandler to HTML::Mason::ApacheHandler, and possibly use PerlAddVar to make global variables persistent. Below is a sample setup I have put together to show one possible way of configuring Mason as the Perl web framework.
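
Here is a sketch of that Apache configuration; the paths and the .html extension are assumptions for illustration, so adjust them to your own layout:

    # load mod_perl's Mason handler
    PerlModule HTML::Mason::ApacheHandler

    # component root (here, the web root itself) and the cache/data directory
    PerlSetVar MasonCompRoot /var/www/html
    PerlSetVar MasonDataDir  /var/www/mason_data

    # let components share $dbh as a persistent global (set in the autohandler)
    PerlAddVar MasonAllowGlobals $dbh

    # hand requests for .html files to Mason
    <Directory /var/www/html>
        <FilesMatch "\.html$">
            SetHandler perl-script
            PerlHandler HTML::Mason::ApacheHandler
        </FilesMatch>
    </Directory>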
