Category: Software



My friend over at BrainFloss gave a very good tutorial on how to expand your ix2-200 StorCenter without losing your data; the tutorial is linked here. I wanted to share some extra information I discovered along the way, because my ix2-200 was not set up exactly as the instructions assume. Having some Linux background, I was able to work around this and still expand my RAID drives using BrainFloss's instructions plus a few other Linux commands. So here it goes.

The objective is as follows:

  • Log into SSH as root (the password may be soho + your admin password).
  • Grow the RAID pseudo-device.
  • Extend the LVM physical volume by resizing it.
  • Extend the logical volume attached to the volume group.
  • Finally, expand the file system on the logical volume so StorCenter can see the available space.

After logging in over SSH I found that the steps I needed to take were a bit different. Here are the steps I ran.

  1. Expand the RAID pseudo-device:

    root@ix2-200d:/# mdadm --grow /dev/md1 --size=max

  2. Expand the LVM physical volume:

    root@ix2-200d:/# pvresize /dev/md1

  3. Determine the free space in the LVM volume group:

    root@ix2-200d:/# vgdisplay   (this displayed multiple volume groups)

          --- Volume group ---
          VG Name               54d89fg7_vg
          .
          .
          .
          Free  PE / Size       476930 / 931.50 GB
          --- Volume group ---
          VG Name               md0_vg
          .
          .
          .
          Free  PE / Size       0 / 0
  4. The next step requires not only the volume group name above, but also the logical volume attached to it, so we need to run the following to get that info:
    root@ix2-200d:/# lvscan
          ACTIVE '/dev/54d89fg7_vg/lv5d49388a'
          ACTIVE '/dev/md0_vg/BFDlv'
          ACTIVE '/dev/md0_vg/vol1'
  5. Now we can run BrainFloss's next command to extend the logical volume, using the values our system returned:

    root@ix2-200d:/# lvextend -l +476930 /dev/54d89fg7_vg/lv5d49388a

  6. Since we did not unmount the volume, we can go right to the next step of expanding StorCenter's XFS file system:

    root@ix2-200d:/# xfs_growfs /dev/54d89fg7_vg/lv5d49388a

  7. Now just send the command to reboot the system so that the web management interface can see the expanded volume:

    root@ix2-200d:/# telinit 6

Your device will reboot and (if you have email notifications correctly configured) it will notify you that “data protection is being reconstructed”, a side effect of expanding the outermost RAID container.
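If you want to keep an eye on that rebuild from the shell instead of waiting for the email, a couple of standard commands will do it (this sketch assumes the same /dev/md1 and volume used in the steps above):

    root@ix2-200d:/# cat /proc/mdstat     # shows the RAID resync/rebuild progress
    root@ix2-200d:/# df -h                # confirms the grown XFS volume once the box is back up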

OSSEC Agent Fails to start


Recently I was adding the OSSEC agent to a Windows 2008 R2 machine and ran into an issue where the agent was failing to start, saying [check config!]. Well, I had never changed the config, so there should not have been anything wrong with it. After some searching I found a post suggesting to open regedit and modify HKLM\system\currentcontrolset\services\OssecSrv, whose ImagePath was c:\program files (x86)\ossec-agent\ossec-agent.exe: add quotes around “c:\program files (x86)\ossec-agent\ossec-agent.exe” and restart. After restarting I still had no luck. Then it came to me.
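If you would rather make that change from an elevated command prompt than click through regedit, something like the following should work; the service key name here is the one from my registry and the value type is an assumption, so check both on your own machine first:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\OssecSrv" /v ImagePath /t REG_EXPAND_SZ /d "\"C:\Program Files (x86)\ossec-agent\ossec-agent.exe\"" /f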

I went into regedit again and realized that before I got to the regedit screen, I was interrupted by User Account Control prompting me to either lower my settings or run regedit as an administrator. Well, if the agent is attempting to start but being interrupted by UAC, then this would surely prevent it from reading the config correctly. So here is what I did.

Go to Start >> Run and type msconfig into the text field. When the MSConfig utility shows up, click on the “Tools” tab. In the list you should see “Change UAC Settings”. Click on it and select the “Launch” button below. This will give you the UAC window where you can lower the bar to “Never notify”. After that is done, go ahead and restart. Once restarted, check your OSSEC agent and it should be running!
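For what it's worth, the same setting can be flipped straight from the registry if you need to script it; this is the standard Windows EnableLUA value, nothing OSSEC-specific, and it also needs a reboot to take effect:

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f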


I thought I would write a little about this because anyone who knows me knows I am not a big fan of ColdFusion’s task manager. I think CF lacks hugely in this area, and although they have made some enormous moves forward with the task manager in 10, there are some key areas where it still falls short.

The first major area is logging and alerting. Before 10, when tasks failed you would get a generic log entry in the admin logs and then nothing else would happen. For those who depend on their tasks to run, it is kind of nice to know when one doesn’t. Even better, when a task fails it is nice to know where in the script it failed, so that if you are in a cfloop you know what iteration it was on. Trust me, knowing the iteration of a failed task cuts down the time you spend tracking down errors. Many times the problem is in the data, not the script, so if you are trying to find why your task failed but have no detailed information, you may spend hours tracking it down only to discover that the data it was looping over in a query was bad.

So I have always been a big fan of writing your own task manager. Now I know it sounds like a big job, and in part it is. But when it is done and you see the benefit, trust me, it is worth the investment in time. The tedious part is getting all the moving parts together. Writing a task manager in CF is not hard; you just need to make sure it has all the bells and whistles you want.

All CF essentially does in the scheduled task manager is an HTTP GET on the URL you give it, at the interval you tell it to run that URL. Those are the basics, so pretty easy, right? This is very easy to replicate: all you need to do is build an app that stores its tasks in a database, then write a script that queries those tasks and puts together a list of everything it needs to do an HTTP GET for. The script loops through the query, calls the URLs it needs to, and exits. This means all you need to register in the CF task manager is that one runner script, as sketched below.
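Here is a stripped-down sketch of what that runner script can look like. The datasource and the tasks table (id, name, task_url, next_run) are made-up names for illustration; the real version needs logging plus the custom tags described next.

    <cfsetting requesttimeout="300">

    <!--- Hypothetical table: tasks(id, name, task_url, next_run) --->
    <cfquery name="qTasks" datasource="myDSN">
        SELECT id, name, task_url
        FROM tasks
        WHERE next_run <= <cfqueryparam value="#dateAdd('n', 2, now())#" cfsqltype="cf_sql_timestamp">
    </cfquery>

    <!--- Hit each task URL with an HTTP GET, just like the built-in scheduler would --->
    <cfloop query="qTasks">
        <cftry>
            <cfhttp url="#qTasks.task_url#" method="get" timeout="120">
            <cfcatch type="any">
                <!--- record the failure along with qTasks.id so we know which task broke --->
            </cfcatch>
        </cftry>
    </cfloop>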

I call my script every 2 minutes from the CF admin scheduled task area, so when CF runs the script it looks for all tasks that need to run within the next 2 minutes. The next thing I did was write a custom tag that I wrap around the tasks my script is doing a GET on. What this custom tag does is keep track of when the task is called by the scheduled task manager and whether it finished. I also wrote a second custom tag that logs a custom message based on what you pass in; you can call it anywhere you like inside your task. That way, if the task fails, you will know what step it was on when it failed. A usage sketch of the two tags follows.
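To give an idea of how those two tags fit into a task page, here is a toy example; the tag names and attributes are my own illustration, not anything that ships with CF:

    <!--- Task page that the runner script hits with an HTTP GET --->
    <cf_taskwrapper taskname="nightlyInvoiceRun">
        <cfloop from="1" to="10" index="i">
            <cf_tasklog message="Processing record #i# of 10">
            <!--- ... the actual work for record i ... --->
        </cfloop>
    </cf_taskwrapper>

Because the wrapper tag has a body, CF executes it once at the start tag and again at the end tag, which is what lets it record both that the task started and that it actually finished.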

Now, putting together the script that runs all the tasks, as I mentioned before, can be a bit convoluted, especially if you are trying to replicate everything the CF task manager offers plus more. One thing I added was the ability to run tasks every set number of days; this means if you want to run a task every 10 days, you can. I had also added in grouping before CF 10 came out, because it fit our business model, but it is not necessary.

Now, the script that runs all the tasks is just that, a script. You will also need to write an app that gives users the ability to administer tasks. Another benefit to all this is that if you have a lot of web servers for different environments, like I do, this manager centralizes all your tasks and lets you customize your logging to fit your needs. I also run a task every morning that checks how many tasks failed the previous day and only emails the appropriate people if something actually failed (see the sketch below).
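That morning check is itself just another task; a bare-bones version, again with made-up table and address names, might look like this:

    <!--- Hypothetical log table: task_log(task_id, status, logged_at) --->
    <cfquery name="qFailed" datasource="myDSN">
        SELECT task_id, status, logged_at
        FROM task_log
        WHERE status = 'FAILED'
          AND logged_at >= <cfqueryparam value="#dateAdd('d', -1, now())#" cfsqltype="cf_sql_timestamp">
    </cfquery>

    <cfif qFailed.recordCount GT 0>
        <cfmail to="ops@example.com" from="tasks@example.com"
                subject="#qFailed.recordCount# scheduled task(s) failed yesterday" type="html">
            <cfdump var="#qFailed#">
        </cfmail>
    </cfif>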

I hope this post has shed a little light for you and reaches someone that can put this to use. Feel free to let me know if you have any specific questions on how this is all put together. Or specific script questions. I will be happy to help.


So, approximately one week ago I got questions as to why a client's upload was getting cut off when they uploaded a file to our FTP server. Since it was only one client out of the hundred or so using our FTP server, I figured there was something wrong on their end or with the routing. I tried to log in myself, uploaded files, moved around, did a little dance, and voila, everything worked fine without issue. Well, not so fast: customers continued to have issues and I couldn't make sense of it. A couple of days after that I got another request to look into why a very frustrated customer's PDF uploads were being corrupted. OK, so now something has got to be going on. I checked the FTP server; uploads worked fine without corruption. I checked the server that actually hosts the clients' files; no errors reported. Finally my hair receded another inch and I continued to look.

So, to give a 10,000-foot view of the layout, we have one Linux VSFTPD server which, might I add, rocks! Attached via a CIFS mount is a Windows file server that just does what its name says: it serves up files. Since this is not my first day at the rodeo ("I have done this before"), I had been fairly confident about this setup. The VSFTPD server essentially provides the data link to the file server and all is well. Well, after looking and looking some more, I began to run a command in Linux some of you might know as tail. Running "tail -f /var/log/syslog" I started to see errors pop up.

And there in the log was "CIFS VFS: Write2 ret -13", and I think, oh, of course, this is as clear as mud!
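If you want to pull just the CIFS noise out of the log rather than watching it scroll by with tail, a quick filter along these lines does the job:

    grep -i "cifs" /var/log/syslog | tail -n 20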

All joking aside, I looked for several hours to find some kind of solution for this. Nothing showed up online, but one thing I did determine was that my Linux release, even though it was patched up to date, was two versions behind. With the CIFS errors coming from the kernel, I decided that ultimately the kernel would at the very least need to be recompiled to fix the VFS issue, or I could just update the entire OS to the latest release and be done with it.

On average, our FTP server gets a connection every 15 minutes. Not extremely busy, but it also does not give me a window large enough to do a full update. So I decided I would go with an alternate idea that would minimize downtime to about 10 minutes. Drawbacks? I would be spending the next week building two Linux servers with VSFTPD and selecting the best Linux OS to support it. I thought, "This can't be that hard." Sorely mistaken, I dove into the task and was suddenly met with problems from the new and improved VSFTPD.

One of the newer features, which I believe came in with version 2.3.5 and up, is that chrooted users are no longer allowed a writable root folder. I knew this was a problem because of how we have our FTP set up: we need to allow users to write to their root. Knowing that the users' root is technically not on the FTP server, I can see no harm in allowing this.

Well, nonetheless the issue remains; however, with the new 3.0 version you can set a config option called allow_writeable_chroot, which solves the issue for you. The only thing you need to do is compile the 3.0 version yourself until all the Linux distributions pick it up. After all was said and done, moving to CentOS with a self-compiled VSFTPD fixed our problem.
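For reference, the relevant lines in vsftpd.conf end up looking something like this (your existing chroot settings may differ, so treat it as a sketch rather than a drop-in config):

    # vsftpd.conf
    chroot_local_user=YES
    allow_writeable_chroot=YES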
