

I have always regarded forums as a way for one person's experience to save another person the struggle. When I started this blog, I set out to share my own experiences and fixes to help others who may run into the same problems. So I hope this reaches someone out there and proves useful.

Last week my IT team spent several hours attempting to resolve an issue where one of our MX servers would return zero new messages when there were clearly over 60,000 new emails sitting between /home/vmail/%mailbox%/MailDir/cur and new. We have a complex mail setup that pulls email from a Dovecot mailbox and inserts it into a database for the application. There are several layers involved, so troubleshooting a system like that can take several hours, working through each layer to determine whether the program that picks up the mail was at fault or the Dovecot server itself.

Below are the steps I took to troubleshoot the issue.

  • Log into the MX server and run [ # tail -f /var/log/dovecot.log ]
  • Open the email client, or start the email client program, to fetch new mail.
  • Watch the Dovecot logs to determine whether another program is picking up the mail before our mail program is able to fetch it.
  • Rule out another process picking up the mail.
  • Watch /home/vmail/%mailbox%/MailDir/new for new incoming mail.
  • Once mail is in the new queue, run the mail client to fetch new mail.
  • At this point I noticed mail being moved from “new” to “cur” while the mail client still reported no new mail to pick up.
  • Next I looked at the mail client to see when the last new message was picked up, and made note of the date stamp on that last message.
  • I then went back to the Dovecot MX server and ran [ # ls -l /home/vmail/%mailbox%/MailDir/ ]
  • This returns a few directories and a few key files that are the focus of this blog post: dovecot.index, dovecot.index.cache, and dovecot.index.log. These files are what Dovecot uses to make loading mail in the mailbox faster; they are an index of the transactions on that particular mailbox. What I noticed was that the dovecot.index file had the same timestamp as the last email successfully received by the client. Even though new mail was obviously coming in, the Dovecot index was not updating.
  • I then renamed all three files to dovecot.index.old, dovecot.index.cache.old, and dovecot.index.log.old.
  • I proceeded to restart the Dovecot service by running [ # service dovecot restart ]
  • Verified that dovecot.index, dovecot.index.cache, and dovecot.index.log were rebuilt after the service restarted.
  • Then I restarted my mail client and fetched new mail.
  • BAM! All 60,000 new messages were now seen and fetched.
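The fix itself boils down to a few commands. Here is a minimal sketch (the maildir path is the example from above, so substitute the affected mailbox; Dovecot rebuilds the indexes on its own after the restart):

```shell
# Move Dovecot's index files aside so they get rebuilt from scratch.
# MAILDIR is the example path from the post -- adjust %mailbox% for
# the affected account.
MAILDIR="/home/vmail/%mailbox%/MailDir"

for f in dovecot.index dovecot.index.cache dovecot.index.log; do
    if [ -e "$MAILDIR/$f" ]; then
        mv "$MAILDIR/$f" "$MAILDIR/$f.old"   # rename, keeping a backup
    fi
done

# Then restart Dovecot so it rebuilds the indexes on the next fetch:
#   service dovecot restart
```

Renaming rather than deleting means you can put the old indexes back if the rebuild goes wrong for some reason.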

I hope this proves useful to anyone out there having the same issue. Feel free to leave your comments or questions anytime! Good luck!


I wanted to take the opportunity to share a bit about my experiences over the past year. I apologize for being a bit quiet recently; with new opportunities come new challenges. Over the past year I have had the great opportunity to lead a team of IT admins, and it has indeed had its challenges: a mixture of personalities, business habits, and a rapidly changing business model.

Granted, when businesses are born, IT teams develop certain habits. Along with the good habits, unfortunately, they also develop bad ones. These habits become SOP (Standard Operating Procedure) for handling implementations, service requests, and deployments. All IT teams go through this, and it can be crippling as the business grows: quality standards suffer, project timelines slip drastically, and SLAs go unmet. It can seem an impossible task to get a ship back on course once it diverts, and as that ship gets bigger, the task becomes more daunting.

It takes strong leadership and a strong team to realize that the ship has strayed off course, admit there is an issue, and set a clear objective to right it. In fact, looking for ways to improve and change our methods as an IT team should never stop, regardless of whether you are able to correct the core issues that affect the team's deliverables. As business objectives and needs change, technology advances, and budgets shrink and grow, so should the IT department keep looking for ways to improve its services.

Other than looking for ways to improve services, we must also be willing to examine our methods as a team and find ways to be more efficient. Businesses today are constantly looking for ways to streamline, and as an IT team we should be doing the same. One thing I have noticed is that the way IT operates can suffer from "that is just the way we have always done it" syndrome. As an IT team we can lose our credibility when our methods become ineffective and our quality and standards suffer. We need to be able to rip off the bandaid at times, open up the wounds, and look objectively for ways to improve rather than blame and perpetuate our problems.

Sometimes admitting faults and flaws can feel like career suicide. However, I must object to this. Some of the greatest leaders of the largest corporations find themselves pointing at themselves for failures. It isn't making mistakes that kills our careers; it is the failure to acknowledge them and develop a plan to right them that will ultimately lead to our demise. Many leaders inherit a predecessor's flawed plans or objectives. The best Level 5 leaders out there are willing to identify the flaws and make the drastic, difficult choices to right them, so that the business can prosper. At all levels, leader or not, shouldn't we do the same?

Processes, I have found, can cripple an IT department when they are either overdone or underdone. Too many processes can frustrate and demoralize staff when it takes weeks of paperwork to make a simple IP change. Too few processes can stunt a business's growth because they create enormous chaos, lack of direction, and demoralization as well. Staff will lack the drive to improve and invest in the business because suddenly everything is a priority 1, so everything becomes rushed and half done, leading to severely suffering quality standards. The IT team stops growing with the business and becomes ever more reactive, because now it cannot keep up.

So it is time to stop the cycle, rip off the bandaid, collaborate, and find ways to improve a crippled system. Stop blaming the process, or the lack thereof, and find a way to fix it. The greatest minds of our generation found ways to solve issues rather than waiting for someone else to fix them. They became successful and legendary because they had a clear objective: get the best and brightest on the team to identify the issues, devise a plan and path to resolution, and then set out to change the course.


My friend over at Brain Floss gave a very good tutorial on how to expand your ix2-200 StorCenter without losing your data; the tutorial is linked here. I wanted to share some additional information I discovered, because my ix2-200 was not set up exactly as the instructions assumed. Having some Linux background, I was able to work around this and still expand my RAID drives using BrainFloss's instructions plus a few other Linux commands. So here it goes.

The objectives are as follows:

  • Log into SSH as root with the root password (the password may be "soho" + your admin password).
  • Grow the RAID pseudo-device.
  • Extend the LVM physical volume by resizing it.
  • Extend the logical volume within the volume group.
  • Finally, expand the file system on the logical volume so StorCenter can see the available space.

After logging in over SSH, I found that the steps I needed to take were a bit different. Here are the steps I ran.

  1. Expand the RAID pseudo-device:

    root@ix2-200d:/# mdadm --grow /dev/md1 --size=max

  2. Expand the LVM physical volume:

    root@ix2-200d:/# pvresize /dev/md1

  3. Determine the free space in the LVM volume group:

    root@ix2-200d:/# vgdisplay (this displayed multiple volume groups)

          --- Volume group ---
          VG Name               54d89fg7_vg
          .
          .
          .
          Free  PE / Size       476930 / 931.50 GB
          --- Volume group ---
          VG Name               md0_vg
          .
          .
          .
          Free  PE / Size       0 / 0
  4. The next step requires not only the volume group above, but also the logical volume attached to it, so run the following to get that info:

    root@ix2-200d:/# lvscan
      ACTIVE '/dev/54d89fg7_vg/lv5d49388a'
      ACTIVE '/dev/md0_vg/BFDlv'
      ACTIVE '/dev/md0_vg/vol1'

  5. Now we can run Brain Floss's next command to extend the logical volume, using the results returned on our system:

    root@ix2-200d:/# lvextend -l +476930 /dev/54d89fg7_vg/lv5d49388a

  6. Since we did not unmount the volume, we can go right to the next step of expanding StorCenter's XFS file system:

    root@ix2-200d:/# xfs_growfs /dev/54d89fg7_vg/lv5d49388a

  7. Now just send the command to reboot the system so that the web management interface can see the expanded volume:

    root@ix2-200d:/# telinit 6
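If you would rather pull the free-extent count out of vgdisplay than read it by eye, here is a short sketch. The heredoc stands in for the sample vgdisplay output above; on the live device you would pipe `vgdisplay <vg>` into the same awk instead.

```shell
# Extract the "Free PE" count for a volume group so it can be passed
# to lvextend as "-l +<count>".  The heredoc mimics vgdisplay output.
free_pe=$(awk '/Free  PE \/ Size/ {print $5; exit}' <<'EOF'
  --- Volume group ---
  VG Name               54d89fg7_vg
  Free  PE / Size       476930 / 931.50 GB
EOF
)
echo "$free_pe"
```

On the device itself this becomes `vgdisplay 54d89fg7_vg | awk ...`, and the result is the number you feed to `lvextend -l +$free_pe`.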

Your device will reboot and (if you have email notifications correctly configured) it will notify you that “data protection is being reconstructed”, a side effect of expanding the outermost RAID container.
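Put together, the whole sequence looks like the sketch below. This is a dry run: the `run` helper only echoes each command, because the device, VG, and LV names here are from my box and the real commands are destructive. Swap the echo for real execution once you have substituted your own names from `vgdisplay` and `lvscan`.

```shell
# Dry-run sketch of steps 1-7.  MD/VG/LV are the names from my ix2-200;
# substitute your own.  `run` only echoes -- remove the echo to execute.
run() { echo "+ $*"; }

MD=/dev/md1
VG=54d89fg7_vg
LV=lv5d49388a

run mdadm --grow "$MD" --size=max       # grow the RAID pseudo-device
run pvresize "$MD"                      # grow the LVM physical volume
run lvextend -l +476930 "/dev/$VG/$LV"  # add the free extents from vgdisplay
run xfs_growfs "/dev/$VG/$LV"           # grow the mounted XFS filesystem
run telinit 6                           # reboot so the web UI sees the space
```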


One question I have seen on forums a fair bit is why application-scoped variables do not re-initialize after being set. The answer lies in the purpose behind the scope itself. What do I mean, and what is a scope? CF uses scopes to define where a variable should persist. Variables with no scope are always assumed to be part of the variables scope.

<cfset variables.test = "string"> is the same as <cfset test = "string">

Variables-scoped variables persist only on the page where they are set, with some exceptions to the rule. Session-scoped variables can be called or referenced for the life of the user's session. Application-scoped variables persist for the life of the application. This means you can reference #application.datasource# on any page, and it will ONLY be initialized when the application is restarted. This is handy when the value will remain static for the life of the application, meaning until CF services are restarted. There are obviously other scopes out there, which I can cover in another post, but for now we are focused on the application scope.

Why is it important to use application-scoped variables if they only refresh when the application starts? Well, look at it this way: you could use <cfset datasource = "test">, but then you would need to define it on every page you call. Not only is that bad practice, but every time it is initialized you are using up memory for that call. So if you are building an application to scale at a later date, you want to make sure your code uses your JVM memory as efficiently as possible. It is very easy to run into memory leaks. One way I have seen this happen is a developer using the request scope to initialize all their variables. That may not seem so bad, but multiply it by the thousands of requests you could get, and it is easy to overwhelm your JVM.

So, back to the subject. If you know that certain variables will not need to be re-initialized on every request or every session, place them in the application scope. This is great for variables like the datasource, or other values you intend to use globally. The only drawback is that during development, if you change the value, you need to remember to restart your CF instance for your application to see the new value. Here is an example of an Application.cfc with application-scoped variables.
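A minimal sketch of such an Application.cfc (the application name and datasource value are placeholders; adjust them for your own app):

```cfml
<cfcomponent output="false">
    <!--- Application settings; this.name is a placeholder --->
    <cfset this.name = "myApp">
    <cfset this.applicationTimeout = createTimeSpan(1, 0, 0, 0)>

    <!--- Runs once, when the application first starts --->
    <cffunction name="onApplicationStart" returntype="boolean" output="false">
        <!--- Set once here; persists until the application restarts --->
        <cfset application.datasource = "test">   <!--- placeholder DSN --->
        <cfreturn true>
    </cffunction>
</cfcomponent>
```

Any page in the application can then reference #application.datasource# without re-initializing it on every request.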
