Category: Infrastructure



I have always regarded forums as a way for one person's experience to save the struggle for another. When I started this blog, I set out to offer my own experiences and fixes to help others who may run into the same problems. So I hope this reaches someone out there and is useful for them.

Last week my IT team spent several hours resolving an issue where one of our MX servers would return 0 new messages when there were clearly over 60,000 new emails sitting between /home/vmail/%mailbox%/MailDir/cur and new. We have a complex mail setup that pulls email from a Dovecot mailbox and inserts it into a database for the application. There are several layers involved, so troubleshooting a system like that can take hours, working through each layer to determine whether the program that picks up the mail was faulty or the Dovecot server itself.

Below are the steps I took to troubleshoot the issue; the key commands are consolidated into a short sketch after the list.

  • Log into the MX server and run [ # tail -f /var/log/dovecot.log ]
  • Open the email client and fetch new mail.
  • Watch the Dovecot logs to determine whether another program is picking up the mail before our mail program is able to fetch it.
  • Rule out another process picking up the mail.
  • Watch /home/vmail/%mailbox%/MailDir/new for new incoming mail.
  • Once mail is in the new queue, run the mail client to fetch it.
  • At this point I noticed mail being moved from “new” to “cur” while the mail client still reported no new mail to pick up.
  • Next I looked at the mail client to see when the last new message was picked up, and made note of the date stamp on that last message.
  • I then went back to the Dovecot MX server and ran [ # ls -l /home/vmail/%mailbox%/MailDir/ ]
  • This returns a few directories and the three files that are the focus of this blog post: dovecot.index, dovecot.index.cache, and dovecot.index.log. Dovecot uses these files to speed up loading of mail in the mailbox; they are an index of the transactions on that particular mailbox. What I noticed was that dovecot.index had the same timestamp as the last email successfully received by the client. Even though new mail was obviously coming in, the Dovecot index was not updating.
  • I then renamed all three files to dovecot.index.old, dovecot.index.cache.old, and dovecot.index.log.old.
  • I restarted the Dovecot service by running [ # service dovecot restart ]
  • Verified that dovecot.index, dovecot.index.cache, and dovecot.index.log were rebuilt after the service restarted.
  • Then I restarted my mail client and fetched new mail.
  • BAM! All 60,000 new messages were seen and fetched.
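
For anyone who wants the fix in one place, here is the whole sequence as a shell sketch. The %mailbox% placeholder from the paths above becomes a variable here, and on systemd distributions the restart line would be [ # systemctl restart dovecot ] instead, so treat this as a sketch rather than a copy-paste script:

    #!/bin/sh
    # Force Dovecot to rebuild a mailbox's index files after they stop updating.
    MAILDIR="/home/vmail/MAILBOX/MailDir"   # MAILBOX stands in for the real mailbox name

    # Move the stale index files aside so Dovecot has to rebuild them
    for f in dovecot.index dovecot.index.cache dovecot.index.log; do
        [ -f "$MAILDIR/$f" ] && mv "$MAILDIR/$f" "$MAILDIR/$f.old"
    done

    # Restart Dovecot; the index files are recreated on the next mailbox access
    service dovecot restart

    # Confirm fresh index files were created
    ls -l "$MAILDIR"/dovecot.index*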

I hope this proves useful to anyone out there having the same issue. Feel free to leave your comments or questions anytime. Good luck!


Hello all! I thought I would post about something I ran into recently, because I found very little information about this issue. Recently I decided to move SharePoint Central Administration (CA) from a WFE to its own server for indexing. When I deployed CA to the new server, I found a couple of issues. First, when trying to access CA using the new server name, I was redirected to the old server and got a failure because the site was shut down. Second, in Search Administration I found that some of the links were also redirecting me to the old CA server. After some searching, I found many posts that said to do the following on each WFE and the new CA server:

  • Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\12.0\WSS and change the value of CentralAdministrationURL to whatever you want it to be.

Well, this did not fix any of the issues for me. What did work: go to Central Administration > System Settings > Alternate Access Mappings and change any URL that contains the name of your old CA server to the new name. That resolved the issue for me.

Hope this helps!


This seems to be the plaguing discussion among many companies today. My opinion? There is no wrong answer, but there is a lot to consider before signing up for services. I wanted to take a little time to lay out some of the questions you should answer before deciding to go with a hosted solution. Deciding who will handle your business-critical infrastructure can be a very stressful decision. Many hosting companies out there will go the extra mile to sell themselves to you, but if you don't know what to ask and what to look for, you may end up locked into a contract you will be very unhappy with. There are a lot of business owners out there who have a difficult time with their desktop, let alone deciding what they need for the business-critical applications in their company. I hope this post helps you decide what direction to head without having to deal with sales pressure, because if you are like me, you have no patience for it. So let's dive in.

Some questions you should answer before starting to look for a hosted solution.

What is your application exposure?

Is your customer base international or national? Web applications obviously have international exposure inherently, but if you know you have a large consumer or customer base in other countries putting heavy demand on your applications, you may need to consider stricter uptime requirements. If your application is primarily national, or internal-use only, you will have a bit more flexibility.

Unfortunately, the more nines you have in your uptime, the more money you will need to be willing to spend to maintain it. For many businesses it is acceptable to have outages at any given time. Obviously none of us wants our services going down frequently throughout the day, but for some businesses it is not a big deal if a server needs a good reboot in the middle of the day.
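
To put numbers on those nines: each additional nine divides the allowed downtime by ten. Here is a quick back-of-the-envelope sketch (nothing vendor-specific, just arithmetic):

    # Allowed downtime per year for a given availability percentage
    # (525960 = minutes in a 365.25-day year)
    for nines in 99 99.9 99.99 99.999; do
        mins=$(echo "scale=2; (100 - $nines) * 525960 / 100" | bc)
        echo "$nines% uptime allows $mins minutes of downtime per year"
    done

So 99.9% still leaves nearly nine hours of downtime a year, while 99.999% leaves barely five minutes, and every step up that ladder multiplies the cost of the redundancy needed to defend it.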

What are your uptime requirements?

This question ties into the first and is very important to answer. If your core business depends on the applications you have on the internet, then you need to look at this very closely, but be realistic. Most people on the internet will simply refresh a web page if they get an error, so it is unrealistic to expect (1) that your servers will never go down and (2) that you can maintain 99.999% uptime with very little upfront cost. To answer this question, work out how much of your business would be impacted if your sites went down for 5 minutes. Then how much for 10 minutes, 15, and so on. The more downtime you are willing to accept, the cheaper it will be to maintain and keep your systems up.

Why does keeping servers up cost so much? Because you need to deal with the fact that eventually your servers will go down. When they do, you will need to restore backups to get your data back, or have another server running that is ready to take over. If your acceptable data loss is 5 minutes and your downtime can be no more than 3 minutes, then you will need incremental backups every 5 minutes plus a load balancer ready to send traffic to another server when one goes down. The more data you have, the more it costs to back it up every 5 minutes; depending on the amount of data, you may need an enterprise-class backup and restore solution to ensure you can get your data back within the allotted time.
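
As a concrete illustration of the backup half of that equation, here is a minimal incremental-snapshot sketch using rsync hard links. The paths and schedule are hypothetical, and a real 5-minute recovery point on a large dataset usually means an enterprise product, but the principle is the same:

    #!/bin/sh
    # Incremental snapshot: files unchanged since the last run are hard-linked
    # against the previous snapshot, so each run only stores what changed.
    # (On the very first run there is no previous snapshot, so rsync warns
    # about the missing link-dest and simply makes a full copy.)
    SRC=/srv/data/                       # hypothetical data directory
    SNAP=/backups/$(date +%Y%m%d-%H%M)   # one directory per snapshot

    rsync -a --link-dest=/backups/latest "$SRC" "$SNAP"
    ln -sfn "$SNAP" /backups/latest      # point "latest" at the new snapshot

Scheduled from cron every 5 minutes, that covers the recovery point; the failover side (the load balancer and the standby server) is a separate line item on top of it.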

That only covers losing a single server; there are many other disaster scenarios you may need to consider in the whole picture. I would suggest making a list of all the possibilities you can think of, then writing out how you might protect yourself against each one.

Now I know this already seems to make a hosted solution the obvious choice. Don't forget, though, that even hosting companies have major outages you will need to plan for. Look up Amazon and see what kind of outages they have had. I am not saying they are bad; I think their service is exceptional, but like any hosting company they suffer outages. Everyone does.

How much data do you have?

Plain and simple: the more data you have, hosted or not, the more it costs to store it. We all fear getting rid of data, so we inherently keep everything. That is not only risky but expensive. Consider moving data you don't need right away to tape backup or a large NAS with cheap, slow drives in it.

Remember, it doesn't just cost to keep data on your servers; it also costs to back it up. Depending on how often you need to back up, you can eat up a lot of space and money before you know it.

Do you have any legal or SLA requirements for backing up your data?

Some businesses, through litigation or merely through what they guarantee in their contracts, need to maintain a rather large library of data. Being able to restore a file to any point in time may be critical for you, but it costs. If you are willing to accept a restored file that may be from the previous day or week, you will definitely save some money. That may mean you lose a little of the file's recent data, but at least not all of it is lost.

What applications do you need to run?

Some web applications are more resource-intensive than others, databases chief among them. They store all the dynamic data behind your web pages and can require a lot of CPU and memory. In hosted environments, this can get costly when you weigh your options.

Do you need to adhere to PCI DSS, SOX, HIPAA, GLBA, or FISMA compliance?

Many hosting companies these days adhere to many of the compliance standards out there, but which standards you must meet will determine where, or whether, you can use a hosting company at all. If you can, consider that the stricter the compliance standard, the more it will cost to host your data. That is due to the security measures that must be in place, the backups required to recover the data, and the encryption of sensitive data at rest.

Virtualization?

This is my favorite section, because virtualization is an awesome technology. It lets you get more out of the hardware you own: by deploying multiple servers on one physical box, you can save yourself thousands of dollars. Virtualization has a heavy upfront cost but definitely a high ROI. It also makes disaster recovery plans easier to execute, uses less power than an equivalent number of physical servers, and takes less manpower to maintain the hardware. All of that saves money.

 

If you can answer the majority of these questions, I think you will be well prepared to start conversations with hosting providers. All hosting companies should be able to give you a cost associated with each of these questions. Most of all, take away the old saying: “Don't put all your eggs in one basket.” If you need to ensure your sites are not down for extended periods, consider a hybrid approach where you purchase your primary equipment to run your day-to-day applications and servers, then use a hosting company as your disaster recovery site. If you want to use hosting companies exclusively, perhaps because the upfront cost of purchasing your own equipment is too much or finding the right person to support it is too daunting, consider multiple hosting providers that use different ISPs, in case there is ever an issue with a particular ISP. Using different hosting providers protects you from the kind of newsworthy outages that have plagued some of the larger providers. Redundancy across all your systems helps ensure they will be up and ready in the event of a disaster or even a minor outage.

You can’t plan for everything that may happen, but the more you plan for, the better you will protect yourself and your company. Hosting companies will be there to sell you whatever they have; hopefully this prepares you a little to know what to ask them and how they may respond. Hosting companies need their maintenance windows to upgrade or troubleshoot their hardware, so be aware of those clauses and ask all about what they do when they need to take down major equipment to replace it. Most of all, get it in writing.

I hope this reaches and helps someone out there. I know this can be a tough discussion to have, or even think about, so drop me a comment or email and ask as much as you would like. We are here to help anyone we can, as well as to provide services.


Anyone who knows me probably knows this is one of my biggest pet peeves. I see it happening every day, but from a large site like LinkedIn... shame on you. As you all probably know by now, more than 6.5 million SHA-1 hashed passwords from LinkedIn were posted to a Russian site by someone attempting to crack them.

Now, for all of you who have gotten into encryption and decryption of sensitive data, I am sure you understand the problem here with just hashing a password with SHA-1 and no salt. If a hacker can determine the algorithm used to hash the passwords, the rest is fairly easy. The simpler passwords can be uncovered quickly, and the more difficult ones, once broadcast to the hacker community, take little time to fall as well.
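
To make that concrete, here is a minimal sketch of a dictionary attack against one unsalted SHA-1 hash. The target value is the well-known SHA-1 of the word “password”; dictionary.txt stands in for a real wordlist:

    #!/bin/sh
    # Hash each dictionary word once and compare it against the stolen hash.
    target=5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8   # sha1("password")

    while read -r word; do
        h=$(printf '%s' "$word" | openssl dgst -sha1 | awk '{print $2}')
        if [ "$h" = "$target" ]; then
            echo "cracked: $word"
            break
        fi
    done < dictionary.txt

Because every user who picked the same password has exactly the same hash, one pass over the dictionary cracks all of them at once, and precomputed tables of common passwords make even that pass unnecessary.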

Salting a password is the process of mixing random bits into the password before hashing it. This makes an attacker's job much harder: instead of hashing each dictionary word once, they would have to try each word with every variation of salt. That eats up their storage and computing power fast, creating a very expensive and difficult task; they would need a large budget along with tons of storage and compute to break down a salted password database. With a bare SHA-1 hash, by contrast, an attacker only has to hash each candidate password once and compare it to the stolen value. If it doesn't match, they simply move on to the next dictionary word instead of trying salted variations of the same password.
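
Here is a minimal sketch of the salted version, again on the command line. The salt length and storage format are illustrative only, and a real system should prefer a slow, purpose-built scheme such as bcrypt or PBKDF2 over salted SHA-1:

    #!/bin/sh
    # Store salt + sha1(salt || password) instead of a bare hash.
    password='hunter2'                  # illustrative password

    salt=$(openssl rand -hex 8)         # random per-user salt
    hash=$(printf '%s%s' "$salt" "$password" | openssl dgst -sha1 | awk '{print $2}')

    # Persist both values; the salt is not secret, it just has to be unique.
    echo "$salt:$hash"

Now two users with the same password get different stored values, and the one-pass dictionary from the previous sketch has to be re-run for every salt in the dump.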

So, as you can see, the time and expense to crack salted passwords can be astronomical, even a deterrent for a hacker attempting to grab hold of your data. With all the problems these days with information being leaked or stolen, you would think a site like LinkedIn would have been more mindful of the world we live in and not assumed that the password obscurity they were using was “good enough.” If the hackers do have the usernames tied to these passwords, the truth is that most people reuse their usernames and passwords; if a hacker grabs your credentials for one site, they could potentially try the same combination on other sites you visit. It is naive to assume the breach at LinkedIn affects only that site. Don't fool yourselves: change your password everywhere you use it.

My recommendation is to vary your passwords. Try not to reuse a password in many places, and come up with a methodology so that if you do have to reuse one, you have a system for which passwords you use where. This makes it harder for someone to reuse a stolen password to gain access to more of your personal information.
