One question I have seen on forums a bit is why application-scoped variables do not re-initialize after they have been set. The answer lies in the purpose behind the scope itself. What do I mean, and what is a scope? CF uses scopes to define where a variable should persist. Variables with no scope are always assumed to be part of the variables scope.

<cfset variables.test = "string"> is the same as <cfset test = "string">

Variables-scoped variables persist only on the page where they are set, with some exceptions to the rule. Session-scoped variables can be referenced for the life of the user's session. Application-scoped variables persist for the life of the application. This means you can reference #application.datasource# on any page, and it will ONLY be initialized when the application is restarted. This is handy when the value will remain static for the life of the application, meaning until the CF services are restarted. There are obviously other scopes out there, which I can cover in another post, but for now we are focused on the application scope.

Why is it important to use application-scoped variables if they only refresh when the application is started? Well, look at it this way: you could use <cfset datasource = "test">, however you would need to define it on every page you call. Not only is that bad practice, but every time it is initialized you are using up memory for that call. So if you are building an application to scale at a later date, you want to make sure your code is using your JVM memory as efficiently as possible. It is very easy to run into memory problems. One way I have seen this happen is a developer using the request scope to initialize all their variables. That may not seem so bad, but multiply it by the thousands of requests you could get and it is easy to overwhelm your JVM.
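To make that concrete, here is a minimal sketch (the variable name is just an example) of the two approaches: re-creating the value on every page or request versus setting it once for the life of the application.

<!--- set at the top of every page (or in the request scope on every request):
      re-created over and over even though the value never changes --->
<cfset request.datasource = "test">

<!--- set once in OnApplicationStart and then only read:
      one copy in memory for the life of the application --->
<cfset application.datasource = "test">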

So back to the subject. If you know that certain variables will not need to be re-initialized on every request or every session, then place them in the application scope. This is great for variables like a datasource or anything else you intend to use globally. The only drawback is that during development, if you change the value, you need to remember to restart your CF instance in order for your application to see the new value. Here is an example of an Application.cfc with application-scoped variables.


<!---
--- Application
--- -----------
--- author: Matthew
--- date: 5/4/14
--->
<cfcomponent displayname="Application" output="true" hint="Handle the application.">

    <cfset THIS.Name = "AppCFC" />
    <cfset THIS.ApplicationTimeout = CreateTimeSpan( 0, 0, 1, 0 ) />
    <cfset THIS.SessionManagement = true />
    <cfset THIS.SetClientCookies = false />

    <cffunction name="OnApplicationStart" access="public" returntype="boolean" output="false" hint="Initialize application-scoped variables once, when the application starts.">
        <cfset application.url = "http://url.domain.com">
        <cfset application.datasource = "datasource">
        <cfset application.smtp = "emailserver.domain.com">
        <cfreturn true />
    </cffunction>

    <cffunction name="OnSessionStart" access="public" returntype="void" output="false" hint="Initialize session-scoped variables when a new session starts.">
        <cfset session.variable = "string">
        <cfreturn />
    </cffunction>

    <cffunction name="OnRequestStart" access="public" returntype="boolean" output="false" hint="Runs before each request is processed.">
        <cfargument name="TargetPage" type="string" required="true" />
        <cfreturn true />
    </cffunction>

    <cffunction name="OnRequest" access="public" returntype="void" output="true" hint="Include the requested page.">
        <cfargument name="TargetPage" type="string" required="true" />
        <cfinclude template="#ARGUMENTS.TargetPage#" />
        <cfreturn />
    </cffunction>

    <cffunction name="OnRequestEnd" access="public" returntype="void" output="true" hint="Runs after each request is processed.">
        <cfreturn />
    </cffunction>

    <cffunction name="OnSessionEnd" access="public" returntype="void" output="false" hint="Runs when a session ends or times out.">
        <cfargument name="SessionScope" type="struct" required="true" />
        <cfargument name="ApplicationScope" type="struct" required="false" default="#StructNew()#" />
        <cfreturn />
    </cffunction>

    <cffunction name="OnApplicationEnd" access="public" returntype="void" output="false" hint="Runs when the application times out or the server shuts down.">
        <cfargument name="ApplicationScope" type="struct" required="false" default="#StructNew()#" />
        <cfreturn />
    </cffunction>

    <cffunction name="OnError" access="public" returntype="void" output="true" hint="Email details of any unhandled exception.">
        <cfargument name="Exception" type="any" required="true" />
        <cfargument name="EventName" type="string" required="false" default="" />
        <cfmail to="toemail@mjddesignconcepts.com" from="fromemail@mjddesignconcepts.com" subject="[ERROR]" type="html">
            <cfdump var="#EventName#">
            <cfdump var="#Exception#">
        </cfmail>
        <cfreturn />
    </cffunction>

</cfcomponent>
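Once those values are set in OnApplicationStart, any page in the application can simply read them. For example (a minimal sketch; the users table and users.cfm page are made up for illustration):

<!--- application.datasource was initialized once, when the application started --->
<cfquery name="qUsers" datasource="#application.datasource#">
    SELECT id, name FROM users
</cfquery>

<cfoutput>
    <a href="#application.url#/users.cfm">User list</a>
</cfoutput>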


SharePoint Lessons Learned


So for the past few weeks, I have been living and breathing SharePoint. I had heard how torturous the process can be, but when I was first setting it up I thought it was just the older versions that had been bad and that maybe the 2013 version was better. Well, I was mistaken.

We needed to implement the Enterprise version because we needed BI with PowerPivot, PowerView, and the other Power tools that come with it. One of the most time-consuming pieces is getting PowerPivot working with Analysis Services and Reporting Services on an Active/Active cluster. It is not that it was necessarily hard; rather, had I known more about SharePoint before setting it up, it probably would have saved me a lot of headache. All of that is a topic for later, though.

The point of this post is to shed a little light on a Distributed Cache issue I had been getting for some time; today I decided to fix it. I was receiving an error in SharePoint saying that AppFabric was being disabled due to inconsistencies between the cache on other hosts in the cluster. After searching many forums for the core issue that causes this, I found a lot of posts telling me how to fix it. While trying out the many suggestions, I learned some things that I thought I would pass along to all of you.

First of all, I will preface this with the fact that the AppFabric cluster had been running on this host without issue when all of a sudden it became corrupted and shut down. Here are my lessons. Try NOT to touch the AppFabric service at all on the server. If it is disabled, the only thing I suggest is to change the service back to automatic start-up, and shut the service down only if you receive an error and need to. I had started and stopped the service many times while trying different things, and I believe many of the errors I ran into were because the AppFabric service was off while I was trying to tie Distributed Cache back to it. So here are the steps to correct the problem.

  • Look at "Distributed Cache" for the host in question in SharePoint Central Administration (CA).
  • If it is stopped or failed, open the SharePoint Management Shell on the host with the issue and run Remove-SPDistributedCacheServiceInstance.
  • If you receive the following error, proceed.

[Screenshot pic1: error from Remove-SPDistributedCacheServiceInstance]

  • Run the following script, replacing the blacked-out area with the name of your server.

[Screenshot pic2: the script to run, with the server name blacked out]

  • Run Remove-CacheHost. If you receive the following error, stop the AppFabric service on the host (a sketch of this step follows the screenshot).

[Screenshot pic3: error from Remove-CacheHost]
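For reference, here is a rough sketch of that step from an elevated PowerShell prompt. I am assuming the default AppFabric caching service name (AppFabricCachingService) and the standard caching administration module; adjust for your environment.

# Load the AppFabric caching cmdlets and point them at this machine's cluster config
Import-Module DistributedCacheAdministration
Use-CacheCluster

# Stop the AppFabric caching Windows service on this host, then retry the removal
Stop-Service -Name AppFabricCachingService
Remove-CacheHost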

  • Once you have stopped the AppFabric service, re-run Remove-CacheHost. If it is successful you will not receive a message; you will just get a PS prompt back.
  • Now run Add-CacheHost. This should prompt you for some parameters, and the values you enter need to be correct. I found that if you enter an incorrect provider here, the service will fail and you will have a heck of a time figuring out why. My suggestion is to go to another host in the cluster and look up the values under HKLM\SOFTWARE\MICROSOFT\APPFABRIC\V1.0\CONFIGURATION for Provider and ConnectionString (a quick way to read those values with PowerShell is sketched after this list). Many forums stated that the provider to use was SPDistributedCacheProvider; however, that caused a problem for me because my cluster was using SPDistributedCacheClusterProvider. Once these parameters are entered, AppFabric should know how to connect to the database, but there is still more to do.
  • Next, run Remove-SPDistributedCacheServiceInstance again. This should remove Distributed Cache on the host for SharePoint. If you receive the following error, check SharePoint CA; if the Distributed Cache service is still showing in "Services on Server", then the service was not fully removed.

[Screenshot pic4: error from Remove-SPDistributedCacheServiceInstance]

  • Run the following commands to fully remove the service from SharePoint. You may receive the error again; that is fine. Make sure you run the last command (a reconstruction of these commands is sketched after the screenshot).

[Screenshot pic5: commands to fully remove the service from SharePoint]
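The original screenshot is not reproduced here, but the commands follow the well-known pattern sketched below (a hypothetical reconstruction, run from the SharePoint Management Shell; the exact filter in the original may differ). The Delete() call is the last command you must not skip.

# Find the Distributed Cache service instance registered for this server
$instance = Get-SPServiceInstance | Where-Object {
    $_.TypeName -eq "Distributed Cache" -and $_.Server.Name -eq $env:COMPUTERNAME
}

# Unprovision may throw the same error again - that is fine
$instance.Unprovision()

# The last command: actually deletes the orphaned service instance from the farm
$instance.Delete()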

  • You should get a PowerShell prompt back after that. If so, run Remove-SPDistributedCacheServiceInstance again; this time you should get the prompt back without an error.
  • Now run Get-CacheHost. You should see the host that is having issues in a DOWN state, because AppFabric cannot talk to Distributed Cache on the host you just removed it from.

[Screenshot pic6: Get-CacheHost output showing the host in a DOWN state]

  • Now add the Distributed Cache back to the host by running Add-SPDistributedCacheServiceInstance. If successful, you should get the PowerShell prompt back without an error.
  • Afterwards, run Register-CacheHost -Provider "SPDistributedCacheClusterProvider" -ConnectionString "Data Source={DatabaseName};Initial Catalog=SharePoint_Config;Integrated Security=True;Enlist=False" -Account "{service account used with AppFabric}" -CachePort 22233 -ClusterPort 22234 -ArbitrationPort 22235 -ReplicationPort 22236 -HostName {name of server}
  • That should ensure that the server's Distributed Cache is registered with the AppFabric cluster. If it is already registered, you may receive an error stating the port is already in use.
  • If all works out, you can now run Get-CacheHost and you should see all your servers with a status of UP.

[Screenshot pic7: Get-CacheHost output showing all hosts with a status of UP]
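One more note on the Add-CacheHost step above: rather than guessing at the provider, read the values a healthy host is already using. A quick sketch, assuming the registry value names are Provider and ConnectionString as they were on my cluster:

# Run this on a working cache host to see the provider and connection string in use
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AppFabric\V1.0\Configuration" |
    Select-Object Provider, ConnectionString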

I found that no one forum in particular had the right answer for me; however, with trial and error and a combination of suggestions from multiple forums, I was able to get it fixed within a few hours. Let me know if I can be of any help, and as always, come visit us at http://www.mjddesignconcepts.com and get discounts on the IT services we provide.


After a lot of troubleshooting and perusing the forums for ideas as to why refreshing data for a PowerView widget in SharePoint continues to error out, I have finally found a fix. We have a Business Intelligence setup to help accounting report on data coming from all the different revenue streams. With this, the data is complex and in need of a data warehouse.

SharePoint is an extremely complex system that requires a thorough understanding of how it operates. When mixed with the power and depth of PowerPivot, PowerView, Report Builder, and Design Viewer, you have a system with many moving parts, plus complex data warehousing that has complexities of its own. PowerPivot and PowerView depend heavily on SQL Analysis Services and Reporting Services to deliver a feature-rich reporting environment. So when looking to refresh data built in PowerPivot, the depth of SharePoint can sometimes get you lost, but that is for a later post.

The short of it is this: if you can verify that Kerberos/NTLM is working properly between your systems and that your unattended authentication account has the correct permissions to access SQL BI, then look at the data connections tied to your Excel PowerView. These are located under the "Data" tab in Excel, and they are how Excel connects back to the original data source when refreshing data dynamically. Even though you have uploaded your PowerView to SharePoint to be seen by your team, the original data connections for the view persist, and they need to be correct. If you pulled your Excel sheet from a mounted drive on the network, then that mapped drive, not the UNC path, will be represented in your data connection. It took me a couple of days of searching around, never suspecting something so simple. Overall, a lesson has been learned, and now I am passing it on to my followers. Hope this is of use.
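As a purely hypothetical illustration of what to look for, a connection created while the workbook was open from a mapped drive might reference the first form below, when what the SharePoint server can actually resolve is the UNC form in the second:

Z:\BI\SalesModel.xlsx            (mapped-drive path, only valid where Z: is mapped)
\\fileserver\BI\SalesModel.xlsx  (UNC path, resolvable from the server)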


Hello all! I thought I would post something that I ran into recently because I found very little information about this issue. Recently I decided to move SharePoint CA from a WFE to its own server for indexing. When I deployed CA to the new server, I found a couple of issues. First, when trying to access CA using the new server name, I was redirected to the old server and got a failure because the site was shut down. Second, in Search Administration I found that some of the links were also redirecting me to the old CA server. After some searching, I found many posts that said to do the following on each WFE and on the new CA server.

  • Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\12.0\WSS and change the value of CentralAdministrationURL to whatever you want it to be.

Well, this did not fix any of the issues for me. I found that if you go to Central Administration > System Settings > Alternate Access Mappings and change any URL that contains the name of your old CA server to the new name, that should resolve the issue.
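If you would rather script the change than click through CA, a sketch along these lines does the same thing from the SharePoint Management Shell (the server names and port below are placeholders):

# List mappings that still reference the old CA server
Get-SPAlternateURL | Where-Object { $_.IncomingUrl -like "*oldcaserver*" }

# Repoint a mapping at the new CA server
Set-SPAlternateURL -Identity "http://oldcaserver:2010" -Url "http://newcaserver:2010"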

Hope this helps!
