K2 – Mike's Blog

Switching SP2010 from Classic Mode to Claims Mode Authentication


SharePoint Server 2013 uses claims-based authentication as its default authentication model, and it is required to enable its advanced functionality. Using claims-based authentication has the following advantages over using Windows classic-mode authentication:

  • External SharePoint apps support. App authentication and server-to-server authentication rely on claims-based authentication. With Windows classic-mode authentication you are unable to use external SharePoint apps. You also cannot use any services that rely on a trust relationship between SharePoint and other server platforms, such as Office Web Apps Server 2013, Exchange Server 2013, and Lync Server 2013.
  • Claims delegation without “double-hop” limitation. SharePoint can delegate claims identities to back-end services, regardless of the sign-in method. E.g., suppose your users are authenticated by NTLM authentication. NTLM has a well-known “double-hop” limitation, which means that a service such as SharePoint cannot impersonate the user to access other resources on behalf of the user, such as SQL Server databases or web services. When you use claims-mode authentication, SharePoint can use the claims-based identity token to access resources on behalf of the user.
  • Multiple authentication providers per web application. When you create a web application in claims-based authentication mode, you can associate multiple authentication providers with the web application. This means that, for example, you can support Windows-based sign-in and forms-based sign-in without creating additional IIS websites and extending your web application to additional zones.
  • Open standards. Claims-based authentication is based on open web standards and is supported by a broad range of platforms and services.

There are several supported scenarios for migrating or converting from classic-mode to claims-mode authentication, all performed with a number of Windows PowerShell cmdlets: you either switch your web apps on SP2010 before upgrading to SP2013, or you convert SharePoint Server 2010 classic-mode web applications to SharePoint Server 2013 claims-mode web applications after you already have SP2013 installed.
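
For the second path (converting after SP2013 is already in place), SharePoint 2013 provides a dedicated cmdlet; a minimal sketch, run from the SharePoint 2013 Management Shell against the example URL used below:

Convert-SPWebApplication -Identity "http://portal.denallix.com" -To Claims -RetainPermissions

The rest of this post covers the first path – switching the web application on SP2010 before the upgrade.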

Steps to switch your SP2010 web apps to claims-based authentication:

1. Enable claims authentication for your web app.

$WebAppName = "http://portal.denallix.com" 
$wa = get-SPWebApplication $WebAppName 
$wa.UseClaimsAuthentication = $true 
$wa.Update()

2. Configure the policy to provide the user with full access.

$account = "Denallix\Administrator" 
$account = (New-SPClaimsPrincipal -identity $account -identitytype 1).ToEncodedString() 
$wa = get-SPWebApplication $WebAppName 
$zp = $wa.ZonePolicies("Default") 
$p = $zp.Add($account,"PSPolicy") 
$fc=$wa.PolicyRoles.GetSpecialRole("FullControl") 
$p.PolicyRoleBindings.Add($fc) 
$wa.Update()

3. Perform the migration.

$wa.MigrateUsers($true)

4. Provision claims.

$wa.ProvisionGlobally()

Once you are done with these changes you may verify that you are using claims authentication for your web application:

GUI way. In Central Administration navigate to web application management, select your web application and click on the Authentication Providers button:

[Screenshot: SP 2010 check web app authentication mode 01]

It will open a window where you can verify your default authentication mode:

[Screenshot: SP 2010 check web app authentication mode 02]

PowerShell way:

$web = Get-SPWebApplication "http://portal.denallix.com" 
$web.UseClaimsAuthentication

It will return True or False depending on whether claims authentication is enabled (the screenshot below shows the enabled state):

[Screenshot: SP 2010 check web app authentication mode 03]
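
If you have several web applications, the same property can be checked for all of them at once with the cmdlets used above:

Get-SPWebApplication | Select-Object Url, UseClaimsAuthentication | Format-Table -AutoSize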

In case you have K2 components installed you may need to perform relevant configuration changes on the K2 side (see the Claims Authentication Configuration section at help.k2.com), which I will cover in a separate blog post.

If you are in the mood for a deep dive into the what and why of claims authentication, you may read through the following articles:

Identity (Management) Crisis (Part 1): The evolution of identity concepts

Identity (Management) Crisis (Part 2): Everything you (think you) know is wrong

Identity (Management) Crisis (Part 3): Solving the Identity Problem

Identity (Management) Crisis (Part 4): Selecting a Comprehensive Identity Management solution

Claims Based Identity: What does it Mean to You? (Part 1)

Claims Based Identity: What does it Mean to You? (Part 2)

Claims Based Identity: What does it Mean to You? (Part 3)


K2 blackpearl Installation: Configuring MSDTC properties


Disclaimer: if you can say how to access the Local DTC properties on a Windows machine without googling or trying hard to remember, then you don't need to read this post. :)

I was building a retro test environment with K2 4.6.2 installed and ran into the following warning raised by K2 Setup Manager – "MSDTC Network Access options not set correctly":

[Screenshot: MSDTC Network Access 1 – Warning]

K2 Setup Manager quite explicitly tells you what you have to do, detailing down to the exact check boxes you have to have checked. Unfortunately 4.6.2 can't repair this without your intervention (I assume recent versions can't either, but I need to double-check this). But anyhow it is not difficult to guess that the correct settings are supposed to look like this:

[Screenshot: MSDTC Network Access 2 – Correct Settings]

Once the required check boxes are in place this warning is resolved; just click "Analyze" after you have made these changes:

[Screenshot: MSDTC Network Access 3 – Warning Cleared]

Looks straightforward enough, but the question here is how to access these "Local DTC properties" (maybe there is an answer in the help hyperlink mentioned by Setup Manager in the same warning?).

Anyhow, without looking it up I keep forgetting how to access this dialog, so I decided to jot it down. You have to run the Component Services MMC snap-in:

[Screenshot: MSDTC Network Access 4 – Component Services MSC]

In this MMC snap-in you just locate "Local DTC" and select its Properties from the context menu. To invoke the snap-in you just run one of the following:

comexp.msc

or

dcomcnfg

In case the location matters to you, both are located under %SystemRoot%\System32.
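
If you'd rather check or set these options without the GUI, they live in the registry under HKLM\SOFTWARE\Microsoft\MSDTC\Security; a minimal PowerShell sketch (map the value names to the exact check boxes Setup Manager complains about before applying anything):

# Inspect the current MSDTC network access settings
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\MSDTC\Security' |
    Select-Object NetworkDtcAccess, NetworkDtcAccessInbound, NetworkDtcAccessOutbound, NetworkDtcAccessTransactions

# Enable network DTC access with inbound/outbound transactions (1 = enabled)
foreach ($name in 'NetworkDtcAccess','NetworkDtcAccessInbound','NetworkDtcAccessOutbound','NetworkDtcAccessTransactions') {
    Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\MSDTC\Security' -Name $name -Value 1
}
Restart-Service MSDTC   # the changes take effect after the DTC service restarts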


Older versions of IE no longer supported by MSFT starting from 12.01.2016


Microsoft announced that it ends support for old versions of IE on 12.01.2016, which means that:

– The only Microsoft-supported browser from this date on is IE11; it alone will continue to receive security updates, compatibility fixes, and technical support on Windows 7, Windows 8.1, and Windows 10.

– IE 8/9/10 are no longer supported, i.e. they will still work, but given that no security patches or other fixes will be provided, using them in the enterprise (or, IMO, at home too) doesn't seem like a good idea.

Microsoft communicated this change about a year ago, but as usual some companies will be unprepared for it, as was the case with the end of support for Windows XP, for example (the US Navy paid $9m to MSFT and still(!) continues to receive support for XP). See the related article on techrepublic.com: Internet Explorer: How Microsoft scaling back support is leaving big orgs playing catchup.

The only problem here is that some older versions of Windows can't upgrade to this browser, but those versions of Windows have themselves reached end of support. Though there are some intrepid enterprises not about to do away with XP, I think the move is inevitable, as it really is not cost effective anymore, however strongly you want to avoid paying for an upgrade or the "pain" of migration.

I guess MSFT is trying to focus more on the quality and speed of their browser, as the backward-compatibility burden goes largely under-appreciated by the general public, which tends to criticize MSFT browser performance severely without realizing that it is an iceberg whose largest part, heavily shaped by backward compatibility, is hidden under water and drags on its agility. So MSFT is in a way doing a great job for their enterprise customers, but in the end everybody displeased with performance makes comparisons against browsers with no backward-compatibility burden whatsoever, which roll out updates in high-frequency DevOps fashion.

If you are using K2 smartforms this also affects you, as the K2 compatibility matrix is going to reflect the change. It doesn't mean that K2 smartforms suddenly stop working in old browsers, but it does mean that K2 also stops releasing fixes or patches for these older versions of Internet Explorer. I.e., K2 support is still going to assist you with troubleshooting issues you may face on older versions of IE and help you find possible workarounds, but no fixes or patches will be built for newly found bugs – in such cases you will be required to upgrade to IE11.

These changes on the K2 side are driven by the Microsoft support policy change, as K2 works on top of Microsoft technologies and tends to focus on quality and on building components for supported/current versions of the Microsoft technology stack.

Please refer to the official K2 Technical Bulletin communicating this change. You may see that the compatibility matrix for K2 smartforms has also been updated, and both the Design and Runtime Browser sections get new footnotes for IE 8/9/10:

[Screenshot: End of support for old IE versions]

P.S. We may also see some backlash and corrections from the MSFT side. There was something similar with decisions like EoL for InfoPath or releasing SharePoint as a cloud-only product, which were reconsidered… Though I think with IE it is more justified for MSFT to stick to this decision.


Removing K2 for SharePoint app


It has been a while since my last K2-related post, not because there is nothing to write about, but because it is a bit difficult to allocate a time slot for writing. Honestly, with the K2 set of technologies you not only get the "marketing" promise of BYOA; you really can create your own K2-based app fast and with a very gentle learning curve. But at the same time, when your work is to help various people at different points of their "gentle learning curve" with K2, this platform can give you a ride up quite a steep learning curve :) I mean there is a ton of stuff to learn, with an array of use cases, design options and integration capabilities – a sort of "fasten your seat belts, we are going to move quickly" type of scenario :)

One of the basic things which seems to be a constant cause of confusion and support cases is the correct uninstallation of the K2 for SharePoint app from SharePoint 2013. Don't get me wrong, installation is important too, but once you have installed the app and started to create artifacts, there are extra things to care about when you need to uninstall the app for one reason or another. Let me elaborate on this and some related points in the following paragraphs.

First things first. SharePoint 2013 is different in terms of app development options, so K2 integration with SharePoint 2013 also differs from what you have for SharePoint 2010. The important thing is that you now deploy the K2 for SharePoint app to your SharePoint 2013 site, and all management of K2 artifacts happens within the SharePoint interface by means of the pervasive K2 Application button readily available on the ribbon:

[Screenshot: Pervasive Application Button]

This is your primary way of creating and deleting K2 artifacts in SharePoint 2013. The process of creating K2 artifacts in SharePoint 2013 is called "appifying": you use the K2 application against SharePoint items to appify them. So here you have 3 key terms:

K2 artifacts – SharePoint 2013 based K2 SmartObjects. Appify (verb) – create a SharePoint 2013 SmartObject from a SharePoint item (list, library etc.). And there is an antonym, meaning that whatever you appified can be de-appified.

So once you have selected a SharePoint 2013 item worth creating a K2 SmartObject from for use in a K2 workflow, you click on the Application button and initiate the appification process, which looks approximately like this:

[Screenshot: Appify 1]

[Screenshot: Appify 2]

You may wonder why I dwell on such trivial things as this new terminology. Because I want you to be very clear on this specific point: the appification process does lead to the creation of a SmartObject which you will be able to see in the Tester Tool, but you SHOULD NEVER manage or delete SharePoint 2013 SmartObjects using the Tester Tool, unless you are in the mood for a complex support case :) Once again: deleting/editing the SP 2013 service instance in the Tester Tool is not supported and will furnish you with troubles you don't want to have.

I hope that passage on terminology will help you memorize this. Now to antonyms :) – meaning why de-appifying is important. As use of the Tester Tool is not supported for managing the SP 2013 service instance, you have to de-appify SharePoint items which have been exposed to K2 (read: appified) before deleting such SharePoint items. If you fail to do this you will end up with such unwelcome guests as orphan SharePoint 2013 SmartObjects, which you don't want to have in your environment. Deleting a SharePoint item which has been appified? – De-appify it first! It is simple – click on that pervasive K2 Application button on the ribbon of the object which you want to de-appify and delete the created K2 artifacts:

[Screenshot: De-appifying a single item]

Now to the topic of uninstalling the K2 for SharePoint app from your SharePoint 2013 site. It should be clear by this point that it involves de-appifying all appified items first. And knowing the way we treat documentation, I will start with what not to do. Do not do this:

[Screenshot: Wrong first step]

Now that you are clear on what not to do, I can afford to add some details and explain why. Never do this Remove as the first step of the K2 app uninstallation process unless you are absolutely sure that no K2 artifacts have been created for this site (nothing was appified); otherwise you will end up with orphan SmartObjects. Or, to keep things simple, never do this first, but learn the correct process of removing the K2 for SharePoint app, which removes K2 artifacts (the right order of steps is crucial here):

Step 1. Removing K2 artifacts from a SharePoint site. You can do this by means of the Uninstall link under the General heading on the K2 for SharePoint Settings page. To access the K2 for SharePoint Settings page you can hover your mouse over the K2 for SharePoint app icon and click on the ellipsis which appears in the top right corner of its tile; this brings up a pop-out menu in which you click SETTINGS (so 2 clicks involved here):

[Screenshot: Accessing K2 for SharePoint Settings 1]

The one-click method to access the K2 for SharePoint Settings page is to click on the K2 app logo inside the "black square":

[Screenshot: Accessing K2 for SharePoint Settings 2]

From the K2 for SharePoint Settings page you have to click on the Uninstall link:

[Screenshot: Uninstall]

And as you can see from the warning you get, this process will remove all K2 artifacts from the site:

[Screenshot: Step 1 – Uninstall]

Step 2. Uninstalling the K2 for SharePoint components from the K2 environment. To accomplish this, run the K2 for SharePoint Setup Manager from the Start menu and select Remove K2 for SharePoint.

The reason I describe this process in such detail is that the Remove button is unhappily not filtered out of the UI and it is very tempting to click on it :) The problem is that it won't remove K2 artifacts, and if you subsequently remove appified SharePoint items (there will be no way of de-appifying them) you will end up with orphan SmartObjects in your environment.

And of course it is documented nicely by K2 – Uninstall is described in the Maintenance section of the K2 for SharePoint Installation and Configuration Guide. But not all useful things get surfaced, just as some dangerous UI buttons don't get hidden at times. So I hope these explanations help at least someone / bring a couple of small points to your attention before you run into an issue or log a support case because of not doing your K2 for SharePoint app uninstall properly, or because of plainly deleting appified SharePoint items without de-appifying them first.

Of course there are more in-depth things to learn about the integration between K2 and SharePoint. For example you may start with KB001707 K2 for SharePoint Component Compatibility, and that is only the beginning if you want to dive into technical details; but as usual it is worth investing your time before you invest heavily in building your solution without doing your homework on compatibility and supportability.


Unable to start Message Queuing service: "The Message Queuing service terminated with service-specific error %%-1072823311"


As you may know, MSMQ (see Wikipedia or MSDN for details) is one of the prerequisites for K2 Server installation; to be more specific, it is necessary to have the following components installed:

Microsoft Message Queuing (MSMQ) Services

  • Message Queuing Server
  • Directory Service Integration

Microsoft Message Queuing or MSMQ is a message queue implementation developed by Microsoft and deployed in its Windows Server operating systems since Windows NT 4 and Windows 95. MSMQ is essentially a messaging protocol that allows applications running on separate servers/processes to communicate in a failsafe manner. A queue is a temporary storage location from which messages can be sent and received reliably, as and when conditions permit. This enables communication across networks and between computers, running Windows, which may not always be connected. By contrast, sockets and other network protocols assume that direct connections always exist.
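
As a side note, on Windows Server 2012 and later both of the required components can be added in one line (a sketch; on 2008 R2 the equivalent is Add-WindowsFeature from the ServerManager module):

# Message Queuing Server plus Directory Service Integration
Install-WindowsFeature MSMQ-Server, MSMQ-Directory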

Surprisingly enough, you not only have to have the required MSMQ components installed but also have to have them in a working state. :) For example, MSMQ is a requirement for working Notification Events, which provide the functionality to notify via e-mail when specific events are executed on servers, implementing a custom event record. The queuing of events is processed using MSMQ. Transactional queues are received from the client recorder to be persisted to the Event database. The Event database receives events from the Queuing System (MSMQ) and saves event mappings to the database for processing (see details here). Both Notification Events and E-mail Events in K2 depend on MSMQ.

The other day I had a case where a K2 server was restored from backup and it was noticed that notification events did not work. K2 Setup Manager was run to double-check email-related settings, and it raised a warning about MSMQ. First of all I confirmed that the MSMQ components were installed, and they were in place, so I attempted to start the MSMQ service, but it failed to start with the following error message:

The Message Queuing service terminated with service-specific error %%-1072823311

As the message text suggests, it is worth checking application-specific logs, which in this case may be found in Windows Event Viewer. In this specific case I was able to see the following event logged upon each attempt to start the MSMQ service:

Event ID 2078 — Message Queuing Logging and Checkpoint Events

The Message Queuing service cannot start. The checkpoint files cannot be recovered. Error %1: %2

The rest was easy, as MSFT provides guidance/details on this; see Message Queuing Logging and Checkpoint Events. Event ID 2078 normally occurs when there is a failure between the time that the checkpoint file was created and when the QMLog was updated with the new version; the QMLog file then refers to an earlier checkpoint file version and recovery fails. So this is something you may run into after a system restore. The resolution is the following:

1) Delete all the checkpoint files, as well as the QMLog file, in the Message Queuing storage directory (%windir%\system32\msmq\storage; see Message Queuing Message and Data Files for details). This can result in some messages being duplicated. However, this resolution will get the service running as soon as possible, usually without data loss.

2) Open the registry editor and navigate to the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters

There locate the LogDataCreated parameter and ensure that its value is set to 0.

3) Try to start the Message Queuing service – it should work now.
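
Steps 2 and 3 are easy to script; a sketch of the registry and service part (I would still delete the checkpoint files and QMLog by hand against the MSFT article, to avoid removing the wrong files):

# Make sure the service is stopped before touching MSMQ state
Stop-Service MSMQ -ErrorAction SilentlyContinue
# Checkpoint files and QMLog live in %windir%\system32\msmq\storage – remove them manually per the MSFT article
Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\MSMQ\Parameters' -Name LogDataCreated -Value 0
Start-Service MSMQ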

Link to related MSFT documentation with more detailed steps:

Event ID 2078 — Message Queuing Logging and Checkpoint Events

While working on this I also noticed a strange/confusing issue where MSMQ is installed on a Windows Server 2008 R2 box but no relevant management node/snap-in is available in Server Manager; this is a separate issue worth separate investigation (some relevant reading is here and here), but probably just re-registering MQSNAP.DLL can fix it.


K2 Host Server Logging – MaxLifeTimeSpan


There is enough documentation about K2 logging, as well as blog posts which cover how to enable and configure the different logging types available in K2: host server logging, ADUM, SmO… For example there is a very good blog post "HOW DO I USE LOGGING IN K2?" which covers most things logging-related.

Working on all sorts of support cases, I most frequently have to switch around settings in the ApplicationLevelLogSettings section of the HostServerLogging.config file:

[Screenshot: K2 ApplicationLevelLogSettings section]

Usually it is all about temporarily raising the logging level so that we have more details logged for troubleshooting purposes. In case you have a hard time remembering which level is the maximum, this picture may be helpful:

[Screenshot: K2 logging levels]

As usual, when you use something routinely it makes you oblivious to other options and things readily available to you. In a way it is like that little-known phenomenon of the hammer bias (aka the law of the instrument/golden hammer), a largely forgotten idea of Abraham Maslow, whose pyramid model enjoys most of the limelight. So the other day I ran into the question of whether it is possible to roll over/cycle the K2 host server log, let's say, daily; whereas I clearly remember that by default it is rolled over on each service restart, I was not sure whether it has such a setting. A quick glance at the documentation brought to my attention the extension-specific sections in the config, which I largely ignore in my troubleshooting sessions – log extension specific properties:

[Screenshot: K2 extension specific log properties]

In particular there is the MaxLifetimeSpan property, where you can set a days/hours/minutes/seconds value which specifies the time after which the log file cycles; e.g., if you set the value to "1:0:0:00", it will cycle every day. Note that it works in combination with the MaxFileSizeKB property, i.e. the log file cycles depending on which condition becomes true first.

Now, why is this so interesting/important? I think it may be a good idea for those who do operational support of K2 servers to configure their logs to cycle every 24 hours and restart the service at 00:00:00 (adjust MaxFileSizeKB so that 24 hours of logging volume always fits into this size, adding a generous safety margin). This will give you a very neat log file archive which will be a pleasure to work with, as it is very easy to review logs for a specific day as well as see the difference for a specific date between night hours and business hours.
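
For the restart part, a sketch using the built-in scheduled task cmdlets (Windows Server 2012+); the service name "K2 blackpearl Server" is an assumption here – check yours with Get-Service first:

# Restart the K2 host server service every night at midnight so the log cycles daily
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Restart-Service ''K2 blackpearl Server''"'
$trigger = New-ScheduledTaskTrigger -Daily -At '00:00'
Register-ScheduledTask -TaskName 'K2 nightly log cycle' -Action $action -Trigger $trigger -User 'SYSTEM'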


AD DS infrastructure failures and K2


I recently worked on a number of cases where clients complained about errors on the K2 side caused by failures on the AD DS side. Specifically, there were suggestions that K2 was unable to handle a partial outage of AD DS, namely the failure of a single DC while other DCs were available. So based on the recent cases I saw, I did some research, and you may find the results below. It is a rather long-form write-up which may require some updates/edits afterwards, but I decided to post it to share this information with the wider community as well as to keep these notes for my own reference in case I need to revisit these findings.

DISCLAIMER: Some flaws in the interpretation of the system behavior described below are possible; those will be edited/corrected when/if necessary.

Symptoms/what you may see in your K2 environment when there are issues with your AD DS infrastructure

Most applications used in Microsoft Active Directory networks have a certain degree of dependency on the availability of Active Directory Domain Services (AD DS) for purposes of authentication and for obtaining required data about directory objects (users, groups etc.).

In case you have failures or other availability issues with your AD DS infrastructure, you may observe symptoms/problems on the K2 side similar to those described below.

Example scenario 1 (WAN link outage/no DCs are reachable to serve queries against remote domain)

You may observe a growing queue of AD SmO queries on the IIS side, to the point at which all queries sent from K2 smartforms to AD DS fail/no longer return any information, and after a long delay the following error message is thrown:

A referral was returned from the server.

This error comes from AD DS (more specifically, from the DC which serves K2 app/K2 server queries to a specific domain) and is most likely caused by the fact that there is no DC available to serve this query at all.

Example scenario 2 (single DC failed, other DCs are available)

You receive the following error on the K2 server:

System.DirectoryServices.ActiveDirectory.ActiveDirectoryServerDownException: The server is not operational. Name: “DC-XYZ.domain.com” —> System.Runtime.InteropServices.COMException: The server is not operational.

You have confirmed that the DC mentioned in the error message is down, but there are other DCs up and running in this domain.

Example scenario 3 (it could be called 2b, but you see that only K2 smartforms are affected)

You may see the same error message as in scenario 2, i.e.:

System.DirectoryServices.ActiveDirectory.ActiveDirectoryServerDownException: The server is not operational. Name: “DC-XYZ.domain.com” —> System.Runtime.InteropServices.COMException: The server is not operational.

But you also see that both K2 workspace and the base OS are working just fine using an alternate DC, while K2 smartforms keep throwing an error which mentions the failed DC (which is indeed down).

All described scenarios are slightly different, but in all of these cases it may seem that K2 didn't switch to an alternative available DC for a specific domain. The key question/requirement here is how to switch to another available DC without any downtime, or with minimum downtime (no K2 server or K2 service restart).

Research and general recommendations

First of all it is necessary to understand what kind of dependency on AD DS we have on the K2 side. The most obvious things are AD Service SmOs and the User Role Manager (URM) service – both of them depend on AD DS availability, but in different ways. AD Service SmOs query AD DS directly (so it is a good test to check whether AD DS queries can be served without issues) whereas the URM service relies on the K2 identity cache and returns cached data from the K2 database. The URM service returns data from multiple security providers registered in K2 and stores cached data in the Identity.Identity table in the K2 database. The URM service depends on AD DS only at the time of cached-data refresh; thus it may allow you not to notice an AD DS failure if your AD DS cache has not expired yet.

At the beginning of this blog post we mentioned two major scenarios of AD DS failure (with a third type which can be qualified as a sub-case of (2)):

1) WAN link failure, when no DCs are available to serve K2 requests to a specific domain because all of them are behind the WAN link. This is applicable to multi-domain environments with remote domains.

2) Failure of the specific DC to which the K2 server is connected for querying a specific domain.

Given AD DS design best practices, neither of those scenarios should present any problems for applications dependent on AD DS:

(1) It is best practice to place extra DCs on remote sites so that there is no dependency on the WAN link, both to preserve link bandwidth and to safeguard against availability issues. At the very least an RODC should be present locally on site for any remote domain if for some reason you cannot place an RWDC locally on each remote site.

NOTE: in a link failure scenario when there are no locally available DCs, there is nothing that can be done from the K2 side; it is a question of restoring the WAN link or placing a locally available RWDC/RODC to mitigate this scenario.

(2) The golden rule and requirement for any production AD DS deployment is to have no fewer than two DCs per domain. So failure of one domain controller should not present any issues.

Now separately on scenario (3), when you get the same error as in scenario (2): "System.DirectoryServices.ActiveDirectory.ActiveDirectoryServerDownException: The server is not operational. Name: "DC-XYZ.domain.com"", but you clearly see that your base OS and K2 workspace are using an alternate available DC whereas K2 smartforms keep throwing an error which mentions the failed DC. With high probability you may see this error with K2 4.6.8/4.6.9.

In this specific scenario you clearly see that K2 workspace works fine at the time you have this issue with K2 smartforms. This is because the Designer, Runtime and ViewFlow web applications in K2 use the newer WindowsSTS redirect implementation (http://k2.denallix.com/Identity/STS/Windows), which was introduced in 4.6.8, whereas K2 Workspace still uses "Windows Authentication".

I.e., you may see that K2 workspace uses Windows authentication and in its web.config file the ADConnectionString is configured as "LDAP://domain.com", whereas for WindowsSTS the K2 label is used, i.e. "LDAP://dc=domain,dc=com".

You may see the aforementioned error occurring on the redirect to "http://k2.domain.com/Identity/STS/Windows/".

There is also a known issue with the Windows STS implementation in K2 where an exception on GetGroups causes user authentication to fail on Windows STS; this was fixed in 4.6.10, but there is still an open request to improve error handling with the aim to catch exceptions caused by temporary unavailability of a DC and have STS retry, so that in cases where the DC is inaccessible for a short interval for unknown reasons, the retry will then connect successfully.

So in scenario (3) you will likely see that the DC locator has switched to an alternate DC but Windows STS does not perform the switch/retry after a temporary DC failure. This is something I need to research more, but it seems that in this case you have to restart the K2 service to get back to normal operation of K2 smartforms.

Irrespective of the scenario (maybe apart from scenario (3)), the first point when you see any such issue is to work with your AD DS team to clarify which specific issue you have on the AD DS side and whether it has been fixed/addressed or not. There is no use in attempting to fix things from the K2 side if the AD DS issue is not addressed, unless it is an issue with a specific DC and there are other locally available DCs. The only possible measure is to temporarily remove the connection string to some extra domain if you can afford this (and if it is a less important/additional domain which has the issue).

You may get confirmation from the AD DS support team that they have an issue with one specific DC which has failed or is down for maintenance (the latter should be a very rare/exceptional case of planned maintenance during business hours) and that there are other locally available DCs to serve requests from the K2 server. If this is the case you can try the following things:

1) Use an AD Service SmO to check that you can query the affected domain – if it works you should not have any issues in K2; if not, proceed with further checks.

2) Use the following command to verify which DC is currently being used by K2 server for specific domain:

nltest /dsgetdc:DomainName

If this command returns the failed DC then this is an issue with your DC locator service/AD DS infrastructure, or to put it another way, a problem external to K2.

In general, AD DS, as a technology with decades of evolution and a high adoption rate, is very stable, and there are no well-known cases where the DC locator fails to switch to an alternative available DC. But depending on the configuration and issues of specific environments, as well as the implementations of application code which interacts with AD DS, there can be cases when DC locator switching does not work properly.

3) If on the 2nd step you get a failed/unavailable DC, try the following command:

nltest /dsgetdc:DomainName /force

This will force a DC locator cache refresh and may help you switch to another DC. Note that sometimes it is necessary to run this a few times until another DC is selected.

4) If step 3 does not help you switch to another available DC, you may try to restart the Netlogon service, as the DC locator cache is implemented as part of this service. Here is an example of how to do it with PowerShell:

Get-Service netlogon | Restart-Service

nltest.exe /sc_verify:<fully.qualified.domain.name.here>

Once this is done, verify whether you have switched to an available DC using the following command:

nltest /dsgetdc:DomainName

5) If you see that after switching the DC locator to an available DC the K2 AD Service SmOs still do not work, consider a K2 service restart or server reboot. This is most likely scenario (3), when K2 workspace/base OS work well but K2 smartforms are "stuck" with the server down exception.

Note that the only valid test here is the use of AD Service SmOs to query the domain – if it works then there is no need to do anything else from the K2 side. In case you see issues in areas depending on the URM User service, it may simply be the case that the cached data has expired and new data is still building up. Sometimes it may be necessary to force an identity cache refresh and wait till the cache builds up completely (this can take a very long time in large-scale production environments).

Additional details and recommendations

K2 performs a bind with the DirectoryEntry class, e.g.:

new DirectoryEntry("LDAP://DC=Domain,DC=COM", "", "", AuthenticationTypes.ReadOnly);

This process relies on the Domain Controller Locator, an algorithm that runs in the context of the Net Logon service. Essentially the Domain Controller Locator is a sort of AD DS client-side component responsible for selecting a specific DC for a specific domain. The Domain Controller Locator has its own cache: the Net Logon service caches the domain controller information so that it is not necessary to repeat the discovery process for subsequent requests. Caching this information encourages consistent use of the same domain controller and, thus, a consistent view of Active Directory.

NOTE: as you may notice from the explanations for scenario 3, K2 Workspace and K2 smartforms perform the bind to AD differently; at least the connection strings they use are different.

Refer to the Microsoft documentation for details:

Domain Controller Location Process

Domain Controller Locator

Recommendations

1) Reconfigure K2 to use GC instead of LDAP.

The global catalog is a distributed data repository that contains a searchable, partial representation of every object in every domain in a multidomain Active Directory Domain Services (AD DS) forest. So essentially a GC placed in the local domain can serve part of the queries which would otherwise go to DCs in another domain, potentially over a WAN link.

From the purely AD DS side, a GC has the following benefits:

– Forest-wide searches. The global catalog provides a resource for searching an AD DS forest.

– User logon. In a forest that has more than one domain, the GC can be used during logon for universal group membership enumeration (Windows 2000 native DFL or higher) and for resolving the UPN when a UPN is used at logon.

– Universal Group Membership Caching: In a forest that has more than one domain, in sites that have domain users but no global catalog server, Universal Group Membership Caching can be used to enable caching of logon credentials so that the global catalog does not have to be contacted for subsequent user logons. This feature eliminates the need to retrieve universal group memberships across a WAN link from a global catalog server in a different site. Essentially you may enable this feature to make use of GC even more efficient.

To reconfigure K2 to use GC you have to edit the RoleInit XML field of the HostServer.SecurityLabel table, replacing "LDAP://" with "GC://", with a subsequent restart of the K2 service.

From a K2 perspective this should improve the responsiveness of AD SmartObjects as well as slightly decrease reliance on the WAN link/the number of queries to DCs outside the local domain.
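
Before editing RoleInit it may be worth confirming that a GC bind actually works from the K2 server; a quick sketch using the same DirectoryEntry mechanism (replace the DN and the sample account with real values from your forest):

# Bind to the Global Catalog and run a trivial search to confirm the GC responds
$gc = New-Object System.DirectoryServices.DirectoryEntry('GC://dc=domain,dc=com')
$searcher = New-Object System.DirectoryServices.DirectorySearcher($gc, '(sAMAccountName=Administrator)')
$searcher.FindOne()   # returns the matching object if the GC served the query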

2) Try the DC locator cache refresh for example scenario 2 (see details above: nltest /dsgetdc:DomainName /force) and verify whether it is a viable workaround. Use "nltest /dsgetdc:DomainName" to confirm which specific DC is being used by the K2 server, and verify the status and availability of that specific DC with your infrastructure team.

3) In scenario 3, try to restart the K2 service, but first confirm that the DC locator uses a working DC.

4) There is also an existing feature request to investigate the possibility of building DC failure detection/switching capabilities into K2 code in future versions of the product.


K2 database collation requirement – finally we have it stated in the right place


If you read my old blog post on installing a SQL Server instance for K2 blackpearl, you are probably aware that the K2 database requires a very specific SQL Server instance collation if you care to be in a supported state. The main problem was that this requirement was mentioned in quite an obscure place which no sane person would ever reach in the endless quest for knowledge :) That original requirement location was quite close to that joke about a ginormous EULA where the vendor injects a sentence saying: "If you really read this EULA up to this point, please give us a call to claim your $1000 reward"…

Finally the K2 blackpearl compatibility matrix was updated this month to reflect this requirement, and I really hope this will clarify the K2 collation requirement once and for all. We can all agree that the compatibility matrix, at least, is something we all read before rushing into an installation or upgrade, right? 😉

So navigate to the K2 blackpearl Compatibility Matrix page > SQL Server section notes and… lo and behold:

[Screenshot: blackpearl collation requirement]

I hope this will help people avoid collation-related issues from now on, and that we are all clear that:

Latin1_General_CI_AS collation is required on the SQL server instance hosting the K2 database


K2 host server eats up my RAM! :) Oh, really?


One of the frequent types of issues I have to work on is high RAM usage by the K2 host server service (normally the description of such a problem is accompanied by the phrase "with no apparent reason"). Most of the time I try to create meaningful K2 community KB articles based on the support cases I work on, but not everything I want to say fits into the Click2KB format. So to discuss the "K2 host server eats up my RAM/I think I see a memory leak here" issue in detail, I decided to write this blog post.

The common symptom and starting point here is that you have noticed abnormally high RAM usage by the K2 host server service, which maybe even leads to a service crash or total unresponsiveness of your K2 platform. What's next, and what are the possibilities here?

Of course, it all depends on what exactly you see.

I think it is quite expected that immediately after a server reboot K2 service memory consumption is lower than after the server has been working for a while: once you reboot your server it starts clean – all threads and allocated memory are clear, hence low RAM usage. But as the server warms up it starts checking whether it has tasks to process, and it fires other activities like user and group resolution by the ADUM manager, recording data in the identity cache table and so on. The more task-processing threads are active, the more memory is required. And keep in mind your host server thread configuration: if you increased the default thread pool limits, you should realize that this allows the server to use more of the available resources.

An empty K2 host server service (no deployed processes, no active users) has a really tiny memory footprint:

[Screenshot: K2 empty server with default thread pool settings]

As you can see it uses less than 300 MB of RAM. And even if you double the default thread pool settings (and I heard that resources for those are allocated upfront), memory usage stays the same, at least on a box without any load.

Now we are switching to the interesting stuff, i.e. what could it be if RAM usage of the K2 service is abnormally high?

And here comes the important point: if your process design or custom code has any design flaws, or the hardware is poorly sized for the intended workload, the processing queue starts growing and this may lead to resource overuse. I.e. it is not a memory leak but a bottleneck caused by such things as (listed based on the probability of being the cause of your issue):
1) Custom code or process design. An easy proof that this is the cause is the fact that you are unable to reproduce this "memory leak" on an empty platform with no running processes. In a way, that tells you there is no memory leak in the K2 platform base code.

You can refer to process design best practices as a starting point here:
http://help.k2.com/kb000352

I have seen enough cases where high memory usage was caused by inefficient process design choices (something like mass uploads to a DB, or updating the properties of 20 MS Word documents in a row designed so that the file is downloaded/uploaded from SharePoint 20 times instead of doing a batch update with one download/upload of the file).

Also, next time you see this high memory usage state, before doing a reboot execute the following queries against the K2 database:

A) Check how many processes are running at the same time right now and whether any of them constantly stays in the running state:

SELECT * FROM [K2].[Server].[ProcInst] WHERE [Status] = 1

It will give you the number of running processes at a specific point in time. Constantly having 20 or more processes in status 1 may indicate a problem, but more importantly, execute this query multiple times at a 1-2 minute interval and see whether some process instances with the same ID stay running constantly or for a very long time. That will likely be your "offending" process, and you will want to check at which step it is so slow, and so on.
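
If you'd rather not re-run the query by hand, here is a small sketch that polls it at the suggested interval (assumes Invoke-Sqlcmd from the SQL Server PowerShell module and your own instance name):

# Poll running process instances every 2 minutes; IDs that stay in the list point at the offending process
for ($i = 0; $i -lt 10; $i++) {
    Invoke-Sqlcmd -ServerInstance 'YourSqlServer' -Database 'K2' `
        -Query 'SELECT ID, StartDate, Folio FROM [Server].[ProcInst] WHERE [Status] = 1' | Format-Table -AutoSize
    Start-Sleep -Seconds 120
}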

B) Check for processes with abnormally high state size:

SELECT TOP 200 
ID, 
DATALENGTH(State) AS StateSize, 
Version, 
StartDate, 
Originator, 
Folio, 
Status
FROM
Server.ProcInst WITH(NOLOCK)
WHERE
Status IN (1, 2)
ORDER BY
DATALENGTH(State) DESC

This query will return the processes with the largest state size in bytes. If any process has a state size of more than 1 MB, it is a problematic process which causes memory overuse, most likely due to the use of looping within the process.

Just an illustrative example of what else can be wrong (and the possibilities are huge here :) ): a colleague of mine ran into an issue where K2 service process memory usage suddenly started growing at a rate of ~16 GB per day, and in the end the reason was that every 10 seconds K2 smartactions tried to process an email which had been sent to the K2 service account mailbox – the same account under which smartactions were configured – which led to a sort of cycle, and each sending attempt ate up a couple of MB of memory. It was only possible to see this with the full logging level, and during the night, when there were no other activities on the server cluttering the log files.

2) Slow response/high latency of external systems or the network. Depending on the design of your workflows, they may have dependencies on external systems (SQL, SharePoint), and it could be the case that a slow response from their side causes growth of the queue on the K2 side along with memory usage growth (a sort of vicious circle, or something like a race condition, can be in play here, and it is often difficult to untangle this and isolate the root cause).

In such a scenario it is better to:

A) At the time of the issue, verify the K2 host server logs and ADUM logs for any timeouts or communication-type errors/exceptions.
B) Check all servers which comprise your environment (K2, SQL, SharePoint, IIS) and watch out for resource usage spikes and errors in Event Viewer (leverage the "Administrative Events" view). K2 relies heavily on the SQL server hosting the K2 DB; if it is undersized or overloaded (scheduled execution of some SSIS packages, a scheduled antivirus scan or backup) and slow to respond, you may see memory usage growth/slowness on the K2 server side.
If your servers are virtualized, confirm your K2 vServer placement with the virtualization platform admins – K2 and the K2 DB SQL instance should not coexist on the same vHost with I/O-intensive apps (especially Exchange, SharePoint).

You should pay special attention to the ADUM logs – if there are loads of errors, those have to be addressed, as the K2 server may constantly waste resources on futile attempts to resolve some no-longer-existing SharePoint group provider (site collection deleted, but the group provider still in K2) or on resolving objects from a non-working domain (failed connectivity or trust config). These resolution attempts eat up resources and may prevent ADUM from timely refreshing the things needed by running processes, thereby making the situation worse (a growing queue).

IMPORTANT NOTE: It never works in large organizations if you just ask your colleagues (SQL admins/virtualization admins) whether all is OK on their side – you will always get the response that all is OK :) You have to ask specific questions and get explicit confirmation of things like VM placement and whether your K2 DB SQL instance is shared with any other I/O-intensive apps. You want to have a list and go through it, eliminating possibilities.
I personally worked with one client who spent months troubleshooting performance, reviewing their K2 solutions inside out and searching for a leak, while in the end the problem was solved by moving the K2 DB to a dedicated SQL Server instance. In hindsight they realized that the K2 DB had previously coexisted with some obscure integration DB – not heavily used, but with an SSIS package which fired twice a day and maxed out SQL resources for a couple of hours, causing prolonged and varied disruptions to their K2 system. Checking SQL was suggested from the very beginning, and the answer was "we don't have issues on the SQL side", even after they asked their SQL admins twice.

3) Inadequate hardware sizing. To get an idea of how to size your K2 server you can look at this table:

[Screenshot: Scale-out sizing table]

This may look a bit controversial to you, but this table is from the Performance and Capacity Planning document from the K2 COE, and it illustrates how you have to scale out based on the total number of users and the number of concurrent users, with a base configuration of 1 server with 8 GB of RAM. Depending on your current hardware configuration this may or may not support your idea of scaling up.

Also see these documents on sizing and performance:
http://help.k2.com/kb000589#
http://help.k2.com/kb000401#
K2 blackpearl Performance Testing

Also see this K2 community KB:
http://community.k2.com/t5/tkb/articleprintpage/tkb-id/TKB_blackpearl/article-id/610

4) Memory leak. This is rather unlikely, as K2 code (like the code of any other mature commercial software) goes through strict QA and testing; personally, I have seen no more than 3 cases where there was a memory-leak type of issue which had to be fixed in K2 – all in old versions and in very specific, infrequent scenarios.

If what you observe is not prolonged memory usage spikes which do not go away by themselves, but rather your K2 service just at times maxing out resource usage and then everything going back to normal with no intervention from your side (such as a K2 service/server restart), then it looks like an insufficient-hardware type of situation (though the other issues mentioned previously may still have an influence here). A memory leak rather implies that you need to stop the service or something similar to resolve it.

If after checking all the points mentioned above you still suspect that there could be a memory leak, I would recommend you open a K2 support case and prepare all K2 logs along with memory dumps collected in the low and high memory usage states (you can obtain instructions on collecting memory dumps from K2 support).


K2 blackpearl Installation: Configuring MSDTC properties – Part 2


This post is an addition to my older post about configuring MSDTC in a K2 environment, and it was triggered by the following error:

[Screenshot: K2 Designer error – Partner transaction manager disabled support for remote transactions]

So basically I had a freshly installed K2 4.6.6 environment (don't ask me why I'm using such an old version 🙂 ) and it was the first deployment of a simplistic workflow which gave me this error.

And if the error message text, which says: "The partner transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D025)", doesn't tell you that something is wrong with your MSDTC config, then a quick google search will confirm it.

The thing is that all you need to know is indeed covered by the K2 documentation, but the problem with any software documentation is that, somewhat like a good dictionary, its creation is driven by certain standards, making it perfect for specific look-ups but at the same time deterring the reader from reading it end to end; and, by contrast with dictionaries, software documentation does not have a super simple data organization facilitating quick and precise look-ups. I mean, rarely do people read specific sections of it unless a specific error drives them to a specific page 🙂

To recap the MSDTC-side requirements for K2: you need to have it configured on all K2 servers and SQL servers used by K2 (have clusters? configure it on all nodes). As you have seen in my previous blog post, it boils down to setting a number of check boxes on the Security tab of the Local DTC properties, which is reachable through the following commands: dcomcnfg or comexp.msc (I still keep forgetting these 🙂 ).

It is worth noting that K2 Setup Manager is capable of setting these properties on K2 servers, but you have to go to SQL and set the same settings there too. This was the first correction I made in my environment after seeing this error. But it was not enough. Looking a little further into the K2 documentation I noticed this:

[Screenshot: MSDTC firewall config 1]

I actually decided to do this via the GUI on the SQL server, and what you need to do is enable all 3 rules from the MSDTC group:

[Screenshot: MSDTC firewall config 2]

And you have to enable this on all K2 servers and SQL servers. Trust me, I tried to enable it on the SQL servers only first 🙂 The same error persists until you enable it on both the K2 and SQL servers.
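
On Windows Server 2012 and later the same three rules can be flipped on from PowerShell instead of the GUI (a sketch; on older OSes use netsh advfirewall):

# Enable the built-in Distributed Transaction Coordinator rule group (run on every K2 and SQL server)
Enable-NetFirewallRule -DisplayGroup 'Distributed Transaction Coordinator'
Get-NetFirewallRule -DisplayGroup 'Distributed Transaction Coordinator' | Select-Object DisplayName, Enabled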


Configuring K2 NLB cluster – Part 1


I've just recorded a YouTube video on how to configure Windows NLB for a K2 NLB cluster:

Please bear with the uninspiring introduction, where I'm clumsily trying to explain what DNS round robin is, and excuse my overuse of the interjection "so", which I noticed only after reviewing my recording – I will try to improve my presentation skills in the future 🙂 For now I put it all down to "live demo" pressure 🙂

The one thing I didn't touch on in this video is Extended Affinity. Actually, as soon as you configure the timeout value available for Single or Network affinity in Multiple host filtering mode, you start using the Extended Affinity feature, which was introduced in Windows Server 2008 R2.

[Screenshot: Windows NLB Extended Affinity]

Unfortunately I'm not aware of official K2 recommendations in terms of Extended Affinity (the K2 documentation features screenshots from some old Windows Server version, it seems), but it appears to be something you may want to leverage for K2 Workspace/SF/SP.

Also, in the video I was a bit imprecise in selecting Both protocols in the port rules: based on the official documentation you only need TCP, and your port setup should look like this:

[Screenshot: K2 NLB port rules]

The configuration of the port rules in the screenshot above assumes that both K2 blackpearl (the K2 host server service) and K2 workspace are hosted on the same cluster.
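
For reference, the same TCP-only rules can be created with the NetworkLoadBalancingClusters cmdlets; a sketch assuming the cluster already exists and is bound to an interface named 'Ethernet' (adjust names and affinity per the K2 documentation takeaways quoted further below):

# TCP-only rules for the K2 Workflow (5252) and Host Server (5555) ports; affinity None suits the stateless host server
Add-NlbClusterPortRule -InterfaceName 'Ethernet' -StartPort 5252 -EndPort 5252 -Protocol TCP -Affinity None
Add-NlbClusterPortRule -InterfaceName 'Ethernet' -StartPort 5555 -EndPort 5555 -Protocol TCP -Affinity None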

Also I should note that, unfortunately, I was not able to make Unicast mode work in VMware Workstation based environments, as it is not as simple as just adding an extra NIC; but for testing purposes it may be sufficient to use Multicast. For production deployments you use either Multicast or, if your network equipment allows it, IGMP Multicast for small/medium size environments. For large environments MSFT itself recommends using more advanced load balancers (among the most popular today are those from F5, and there are a lot of K2 deployments where F5 ADCs are used).

Just for clarity I will also quote an old note from windowsitpro.com (from 2006 🙂 ) which clarifies the two-NICs requirement for Unicast NLB quite neatly:

Unlike Microsoft Cluster service clusters, in which you should have separate NIC’s to separate regular traffic from the cluster heartbeat traffic, NLB members don’t need multiple NIC’s. However, many people still recommend two NICs in NLB servers, given the low cost of quality NIC’s. Additionally, multiple network cards are desirable in the following situations:

  • For inter-host communication between NLB cluster members when operating in uni-cast mode. With only one NIC NLB members are unable to communicate directly with each other.

  • If the NLB members connect to back end services, for example a Microsoft SQL Server database, it might be desirable to use separate NICs to separate the front and back end traffic.

You may also see the following error whenever you try to run the NLB console directly from one of your NLB hosts:

[Screenshot: NLB error when console is run from an NLB host]

This is a known issue and you can safely ignore it. Just run the NLB management console from your management workstation and you will not receive any errors.

Links to related official K2 documentation:

(1) K2 blackpearl Installation and Configuration Guide > Prerequisites > Set up NLB

Takeaways from this document:

“For a K2 Host Server cluster, use a Unicast operation mode and set the affinity to None. Since the K2 Host Server is a stateless machine, no affinity is necessary per session.”

“For a K2 Workspace Server cluster, use a Unicast operation mode and set the affinity to Single. You will want to ensure that the web pages retain an affinity to the web server during the session.”

“For a K2 for SharePoint Server cluster, use a Unicast operation mode and set the affinity to Single. You will want to ensure that the web pages retain an affinity to the web server during the session.

The same is true for all server clusters that host web based components (such as Process Portals, web services, web parts).”

“As mentioned in the Network Load Balancing Setup and Configuration topic, at least two network adaptors are required when the Unicast operation mode is selected.

Set up the NLB configuration to allow traffic through on the K2 Workflow (default of 5252) and K2 Hostserver (default of 5555) ports.”

(2) K2 blackpearl Installation and Configuration Guide > Planning Guide > Additional Planning Considerations > Network Load Balancing Setup and Configuration

Main takeaway here is the following:

“Traffic to and from a SharePoint site or the K2 Workspace involves a considerable amount of communication from the Web servers to the back-end servers running SQL Server; good connectivity between them is required. It is therefore recommended that Web servers be dual-homed:

  • One network adapter handling the incoming Web requests by using NLB

  • One network adapter acting as a normal server adapter to communicate to the server running SQL Server along with the other servers within the infrastructure, such as domain controllers for authentication purposes”

(3) K2 SmartForms – Setting up NLB

(4) K2 and Firewalls

(5) Seemingly random 401 errors in load balanced SharePoint, Workspace, SSRS and K2 server environments

(6) F5 DevCentral – Load Balancing K2 Blackpearl

 


“Value cannot be null. Parameter name: token” for K2 links on SP2013 site in SP2010 compatibility mode


When you migrate SharePoint and K2 you may run into a problem where all K2 links on your site just give you “Value cannot be null. Parameter name: token.” I ended up with this issue after merely changing the site collection compatibility range (for testing purposes; here is how to do it) and then creating a new site within it in SP 2010 compatibility mode. It was immediately obvious that something was wrong or missing:

K2 - Value cannot be null for SP 2010 Compatibilty mode site

Here is how to fix this. You need to configure classic Windows claims to work with K2 from a SharePoint claims-enabled site (see my earlier blog post on how to configure claims authentication for your site; there is also a related K2 help section). To configure classic Windows claims for K2, do the following:

1) Retrieve the SigningCertificateThumbprint by issuing the following command in the SharePoint 2013 Management Shell:

(Get-SPServiceApplication -Name SecurityTokenServiceApplication).SigningCertificateThumbprint 

Copy the returned value to use in step 2.

2) Open SQL Server Management Studio and edit the SQL script below, replacing the value “CAEF8EEA3D68074C347AC9584E60C6FC406C8AAB” with the one retrieved in step 1 in your environment.

DECLARE @issuerId INT

INSERT INTO [K2].[Identity].[ClaimIssuer] (Name, Description, Issuer, Thumbprint, Uri, UseForLogin)
VALUES ('SharePoint Windows STS', 'SharePoint Windows Authentication', 'SharePoint', 'CAEF8EEA3D68074C347AC9584E60C6FC406C8AAB', NULL, 0)

SET @issuerId = SCOPE_IDENTITY();

UPDATE [K2].[Identity].[ClaimTypeMapping]
SET IssuerID = @issuerId
WHERE ID = 2

UPDATE [K2].[Identity].[ClaimTypeMap]
SET ClaimType = 'http://schemas.microsoft.com/sharepoint/2009/08/claims/userlogonname'
WHERE ID = 3

INSERT INTO [K2].[Identity].[ClaimTypeMap] (ClaimTypeMappingID, ClaimMappingType, OriginalIssuer, ClaimType, ClaimValue)
VALUES (2, 'IdentityProviderClaim', 'SecurityTokenService', 'http://schemas.microsoft.com/sharepoint/2009/08/claims/identityprovider', 'windows')

3) Next you can check the Identity.ClaimIssuer table in the K2 database:

SELECT * FROM [identity].ClaimIssuer

It should contain SharePoint Windows STS with the appropriate thumbprint value:

K2 - Value cannot be null for SP 2010 Compatibilty mode site - Issuers

Once this is done, the “Value cannot be null. Parameter name: token” error should be gone. It seems that in my case the thumbprint value changed in my environment (i.e. I’m pretty sure the entry for SharePoint Windows STS did exist in my ClaimIssuer table before), though I’m not sure what triggered that change.
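If, as in my case, the issuer row already exists and only the thumbprint changed, the refresh can be scripted end to end. A minimal sketch, assuming the SharePoint 2013 Management Shell and the Invoke-Sqlcmd cmdlet are available on the same box and the K2 database has its default name:

$thumbprint = (Get-SPServiceApplication -Name SecurityTokenServiceApplication).SigningCertificateThumbprint

# Refresh the stored thumbprint on the existing SharePoint Windows STS issuer row.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
UPDATE [K2].[Identity].[ClaimIssuer]
SET Thumbprint = '$thumbprint'
WHERE Name = 'SharePoint Windows STS';
"@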


K2 Community Articles


Since K2 Community Articles were introduced a year or so ago, this channel has brought a lot of great content to the K2 community site. Of course, quality varies across the board, but the bottom line is that the K2 community benefits from quickly available, relevant information on real-world K2 issues. I see a lot of folks solving their problems without logging a support ticket, or discovering relevant information at an early stage of investigating their issues, often without any help from K2 support engineers.

I authored some of these articles and edited others, and as I sometimes found it difficult to locate a particular K2 community article I had worked on, I decided to list all of them here. I also list links to some really good articles authored by other people.

A good entry point for checking out the latest Community Articles on the K2 community site is this page, where you can see popular threads in the K2 Community, the latest community articles, as well as the most kudo’d authors and articles.

In case you see any mistakes (technical, or just typos/grammar 🙂 ) or have any questions about these articles, feel free to let me know via comments under this post.

Currently I am just listing the articles in no particular order, but I may categorize/rearrange them at some later point.

K2 blackpearl service high RAM usage

K2 Host Service CPU usages close to 100% 

Thread pool locking issues when using K2 Client API inside of workflow

Unresponsive K2 Workspace – Server run out of worker threads

IPC Event processing delays

Workflow permissions not working correctly when configured via group

Analysis fails after upgrading from 4.6.x to 4.6.8: Constrained delegation is not enabled for the Active Directory account

Initialization failed before PreInit – Unable to establish a secure connection with the Active Directory server

Configuring Kerberos for K2 environment

How to reduce the size of K2 database on development machine

4.6.9 upgrade wipes out serverlog tables

PDF convertor generates an empty form when multiple security providers configured

How to increase default file size limit for File Attachment Control in K2 SmartForms

64007 Provider did not return a result for K2:Domain\User on GetUser 


How to create self-signed certificate for K2 NLB cluster and add it to trusted root CA on client machines via GPO


I’ve recently recorded a video covering this topic, but I think it also makes sense to write a bit here, if only to give you the ability to copy-paste the related commands 🙂

When you install a K2 blackpearl NLB cluster, K2 Setup Manager can create the K2 sites for you, and it also creates HTTPS bindings for them. But K2 Setup Manager creates an individual self-signed certificate for each of the NLB cluster nodes, which leads to an ugly certificate security warning whenever you try to access K2 Workspace or any other K2 site.

To address this you have to do the following:

1) Create a new self-signed certificate for your K2 NLB cluster name using the New-SelfSignedCertificate cmdlet:

New-SelfSignedCertificate -DnsName <server dns names> -CertStoreLocation Cert:\LocalMachine\My

You have to do this on one of your K2 servers. This cmdlet will create a new self-signed certificate and place it in the Personal certificate store of that server. Copy the certificate hash from the output of this command – you will need it in the next steps.

2) Next, obtain the appid of your current K2 HTTPS app/binding using the following command (use an elevated CMD for this):

netsh http show sslcert ipport=0.0.0.0:443

Copy the appid from the output to use in step 3.

3) “Delete”/un-assign the current SSL certificate from your HTTPS binding (the one assigned by K2 Setup Manager):

netsh http delete sslcert ipport=0.0.0.0:443

Insert your certificate thumbprint copied in step 1 and the appid obtained in step 2 into the following command, and execute it from an elevated command prompt:

netsh http add sslcert ipport=0.0.0.0:443 certhash=<cert thumbprint from step 1> appid={<app id from step 2>} certstorename=MY

At this point we have created a self-signed certificate and assigned it to the HTTPS binding for K2 on our first server. But we are still going to get a certificate warning, because our certificate is self-signed and not trusted. To address this, it is necessary to import it into Trusted Root Certification Authorities on all machines which will be used to access K2 sites.

4) In this step we will export the certificate into a P7B file in order to import it into Trusted Root Certification Authorities later. Execute the following in PowerShell:

$cert = Get-ChildItem -Path Cert:\LocalMachine\My\<thumbprint>

Export-Certificate -Cert $cert -FilePath C:\servercert.p7b -Type P7B

This will create a “servercert.p7b” file in the root of the C drive. For testing purposes you can add it into Trusted Root Certification Authorities manually on your K2 server – right-click on it, select Install Certificate > Next > Place all certificates in the following store > Browse > Trusted Root Certification Authorities > OK > Next > Finish.
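The same import can be scripted instead of clicking through the wizard; a minimal sketch, assuming Windows Server 2012 or later where the Import-Certificate cmdlet from the PKI module is available:

# Import the exported P7B into the machine's Trusted Root store.
Import-Certificate -FilePath C:\servercert.p7b -CertStoreLocation Cert:\LocalMachine\Root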

At this point you should be able to access K2 Workspace via the NLB name from your first K2 server, assuming all the steps listed above were performed on it and you did not hit the second node of your K2 NLB cluster by chance. To exclude the latter, you can take that node offline or stop it in NLB Cluster Manager:

K2 NLB Stop Node

5) Now we can deploy our P7B certificate file to Trusted Root Certification Authorities on all machines in our domain using the GPO certificate deployment option (Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities):

K2 NLB Import Certificate GPO

Once you have created this GPO and linked it to the appropriate OU (the one which contains the machines from which you access K2 sites), you can update local group policies on your client machines and access K2 sites via the NLB name over HTTPS without any certificate-related warnings.

6) Final touch 🙂 We need to add the certificate created in step 1 to the second K2 server and configure it for the K2 HTTPS binding on that server. The P7B file we created earlier does not fit this purpose, so we need to export the certificate once again, this time including the private key.

Run MMC on K2 server one and add the Certificates snap-in targeting the Computer certificate store:

K2 NLB Open Computer Cert Store

Locate your K2 NLB cluster certificate created in step 1 and export it including the private key:

K2 NLB Export Certificate

Make sure you select “Export Private Key”, specify a password for the certificate, and in the end you should get a PFX file. Copy this PFX file to your second server and install it into the Personal certificate store for that machine, then use the IIS console to select this certificate for the K2 sites’ HTTPS binding.
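If you prefer to avoid MMC, the same export/import can be done in PowerShell; a hedged sketch, assuming Windows Server 2012 or later where Export-PfxCertificate/Import-PfxCertificate are available (the password and file paths are placeholders):

# On the first K2 server: export the NLB certificate together with its private key.
$pfxPassword = ConvertTo-SecureString -String '<password>' -Force -AsPlainText
$cert = Get-ChildItem -Path Cert:\LocalMachine\My\<thumbprint>
Export-PfxCertificate -Cert $cert -FilePath C:\k2nlb.pfx -Password $pfxPassword

# On the second K2 server: import into the Personal machine store, then select
# this certificate for the K2 sites' HTTPS binding in the IIS console.
Import-PfxCertificate -FilePath C:\k2nlb.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword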

That’s it – you have created a self-signed certificate for the K2 NLB cluster name, configured it for use on all your nodes, and added it to the Trusted Root Certification Authorities on all your machines via GPO.

Here is the video which walks you through all these steps:


How to: Drop multiple databases via SQL Script (no worries backup/restore is covered too :) )


Recently I did rather a lot of tests requiring me to work with non-consolidated K2 DBs. The tests included multiple DB restore/delete operations, and I realized that I needed a script to quickly drop all my K2 DBs and start from scratch. Here is the script:

USE master;
GO
SELECT 'ALTER DATABASE ' + name + ' SET SINGLE_USER WITH ROLLBACK IMMEDIATE; ' + 'DROP DATABASE ' + name + ';'
FROM sys.databases WHERE name LIKE 'K2%';
GO

The script generates such a statement for every database whose name is prefixed with “K2”; you just need to copy its output into a new query window and execute it.
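For the copy-paste-averse, the same can be done in one go from PowerShell; a hedged sketch, assuming the SqlServer module (for Invoke-Sqlcmd) and a local instance named 'localhost':

# Drop every database whose name starts with "K2" (destructive - be sure!).
Import-Module SqlServer
$dbs = Invoke-Sqlcmd -ServerInstance 'localhost' -Query "SELECT name FROM sys.databases WHERE name LIKE 'K2%'"
foreach ($db in $dbs) {
    # SINGLE_USER with ROLLBACK IMMEDIATE kicks out active connections first.
    Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
ALTER DATABASE [$($db.name)] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [$($db.name)];
"@
}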

And in case you prefer to back things up before you delete them, here is a similar script for backup:

USE master;
GO
SELECT 'BACKUP DATABASE ' + name + ' TO DISK=''C:\DBs\' + name + '.bak' + Char(39) 
FROM sys.databases WHERE name like 'K2%';
GO

And for restore you can use the script below. Unfortunately it uses hard-coded file paths, but assuming your backup files have default DB names (and, for example, were created by the script above) you can get away with minimal find-and-replace adjustments (the path to the backup files and your SQL instance data directories may need to be adjusted). Here is the restore script:

USE MASTER

RESTORE DATABASE [K2Categories]
FROM DISK = 'c:\DBs\K2Categories.bak'
WITH MOVE 'K2Categories' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Categories.mdf',
MOVE 'K2Categories_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Categories.log',
REPLACE;
GO

RESTORE DATABASE [K2Dependencies]
FROM DISK = 'c:\DBs\K2Dependencies.bak'
WITH MOVE 'K2Dependencies' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Dependencies.mdf',
MOVE 'K2Dependencies_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Dependencies.log',
REPLACE;
GO

RESTORE DATABASE [K2EnvironmentSettings]
FROM DISK = 'c:\DBs\K2EnvironmentSettings.bak'
WITH MOVE 'K2EnvironmentSettings' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EnvironmentSettings.mdf',
MOVE 'K2EnvironmentSettings_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EnvironmentSettings.log',
REPLACE;
GO

RESTORE DATABASE [K2EventBus]
FROM DISK = 'c:\DBs\K2EventBus.bak'
WITH MOVE 'K2EventBus' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EventBus.mdf',
MOVE 'K2EventBus_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EventBus.log',
REPLACE;
GO

RESTORE DATABASE [K2EventBusScheduler]
FROM DISK = 'c:\DBs\K2EventBusScheduler.bak'
WITH MOVE 'K2EventBusScheduler' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EventBusScheduler.mdf',
MOVE 'K2EventBusScheduler_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2EventBusScheduler.log',
REPLACE;
GO

RESTORE DATABASE [K2HostServer]
FROM DISK = 'c:\DBs\K2HostServer.bak'
WITH MOVE 'K2HostServer' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2HostServer.mdf',
MOVE 'K2HostServer_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2HostServer.log',
REPLACE;
GO

RESTORE DATABASE [K2Server]
FROM DISK = 'c:\DBs\K2Server.bak'
WITH MOVE 'K2Server' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Server.mdf',
MOVE 'K2Server_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Server.log',
REPLACE;
GO

RESTORE DATABASE [K2ServerLog]
FROM DISK = 'c:\DBs\K2ServerLog.bak'
WITH MOVE 'K2ServerLog' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2ServerLog.mdf',
MOVE 'K2ServerLog_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2ServerLog.log',
REPLACE;
GO

RESTORE DATABASE [K2SQLUM]
FROM DISK = 'c:\DBs\K2SQLUM.bak'
WITH MOVE 'K2SQLUM' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SQLUM.mdf',
MOVE 'K2SQLUM_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SQLUM.log',
REPLACE;
GO

RESTORE DATABASE [K2WebDesigner]
FROM DISK = 'c:\DBs\K2WebDesigner.bak'
WITH MOVE 'K2WebDesigner' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2WebDesigner.mdf',
MOVE 'K2WebDesigner_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2WebDesigner.log',
REPLACE;
GO

RESTORE DATABASE [K2SmartBox]
FROM DISK = 'c:\DBs\K2SmartBox.bak'
WITH MOVE 'K2SmartBox' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SmartBox.mdf',
MOVE 'K2SmartBox_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SmartBox.log',
REPLACE;
GO

RESTORE DATABASE [K2SmartBroker]
FROM DISK = 'c:\DBs\K2SmartBroker.bak'
WITH MOVE 'K2SmartBroker' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SmartBroker.mdf',
MOVE 'K2SmartBroker_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2SmartBroker.log',
REPLACE;
GO

RESTORE DATABASE [K2WebWorkflow]
FROM DISK = 'c:\DBs\K2WebWorkflow.bak'
WITH MOVE 'K2WebWorkflow' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2WebWorkflow.mdf',
MOVE 'K2WebWorkflow_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2WebWorkflow.log',
REPLACE;
GO

RESTORE DATABASE [K2Workspace]
FROM DISK = 'c:\DBs\K2Workspace.bak'
WITH MOVE 'K2Workspace' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Workspace.mdf',
MOVE 'K2Workspace_Log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA\K2Workspace.log',
REPLACE;
GO
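To avoid maintaining the hard-coded list above, the RESTORE statements can also be generated from whatever .bak files are present. A minimal PowerShell sketch, assuming backups in C:\DBs, the data directory used above, and logical file names following the <DbName>/<DbName>_Log convention (as in these scripts):

# Generate a RESTORE script for every K2*.bak file found in the backup folder.
$backupDir = 'C:\DBs'
$dataDir = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLK2\MSSQL\DATA'
Get-ChildItem -Path $backupDir -Filter 'K2*.bak' | ForEach-Object {
    $db = $_.BaseName
@"
RESTORE DATABASE [$db]
FROM DISK = '$($_.FullName)'
WITH MOVE '$db' TO '$dataDir\$db.mdf',
MOVE '${db}_Log' TO '$dataDir\$db.log',
REPLACE;
GO
"@
} | Set-Content -Path (Join-Path $backupDir 'RestoreK2DBs.sql')

Review the generated RestoreK2DBs.sql and run it in SQL Server Management Studio.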


K2 blackpearl installation – complete removal/clean up


Recently I did a lot of test installs of K2 blackpearl reusing the same machines, i.e. it was necessary for me to remove everything related to K2 blackpearl before I could install it again on the same server. Below you may find a few notes/observations about this.

In order to remove K2 blackpearl you just run K2 blackpearl Setup Manager on your server and select “Remove K2 blackpearl”:

K2 blackpearl Setup Manager - Remove K2 blackpearl

This will remove all K2 components from your server and ask you for a reboot. Once this is done, the following things still have to be removed if your goal is to clean up everything and start from scratch:

1) Some files may still remain in the following folders:

%ProgramFiles(x86)%\K2 blackpearl

%ProgramData%\SourceCode

%UserProfile%\AppData\Local\SourceCode_Technology_Hol

If your goal is a full clean-up you can remove all these folders, provided you uninstalled all your K2 components via Setup Manager beforehand and no K2 components are listed in Programs and Features (appwiz.cpl); see the sketch below for a scripted removal. NOTE: If you have SmartForms or other additional components, you uninstall in reverse order – the last component installed is removed first, and so on.
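A minimal clean-up sketch for the folders listed above; run it elevated, and only after Setup Manager has removed all K2 components:

# Remove leftover K2 blackpearl folders (destructive - verify the paths first).
$paths = @(
    "${env:ProgramFiles(x86)}\K2 blackpearl",
    "$env:ProgramData\SourceCode",
    "$env:UserProfile\AppData\Local\SourceCode_Technology_Hol"
)
foreach ($p in $paths) {
    if (Test-Path $p) { Remove-Item -Path $p -Recurse -Force }
}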

2) Self-signed certificates for the K2 server and sites are not removed from the machine’s Personal store on your K2 server. If your goal is a full clean-up, you may want to remove them too.

3) The K2 database is not deleted either; for a complete clean-up you should drop it on the SQL server.

4) I also noticed that in my case the K2WTS service was not removed correctly by Setup Manager during the removal process. The K2WTS service is also known under the display name “K2 Claims To Windows Token Service.” Here is how to check via PowerShell whether it is still present after removal of K2 blackpearl:

Get-WmiObject -Class Win32_Service -Filter "Name='K2WTS'"

Below is sample output in case the service is still present:

Get-WmiObject K2WTS

No output means that no service with that name was found.

And this is how to remove it via PowerShell:

Get-WmiObject -Class Win32_Service -Filter "Name='K2WTS'" | Remove-WmiObject

Of course there are other ways to remove a service in Windows, as Remove-WmiObject is available only in PowerShell 3.0 or newer. You can also use sc.exe (sc.exe delete K2WTS), or even locate and delete the relevant entry in the registry using regedit.exe.

Also, only after writing this blog post did I accidentally discover the relevant section in the official K2 documentation which covers this topic: K2 blackpearl Installation and Configuration Guide > Maintenance > Remove > Manual Environment Clean Up


24404 Authentication with server failed when connecting from WorkflowManagmentServer to the WorkflowClient


Recently I had an interesting support case where we spent way too much time investigating a problem which turned out to be simple once we figured it out 🙂 It was a typical case of not seeing the forest for the trees: the K2 environment we dealt with was quite complex and involved F5 NLBs, so it was easy to be distracted by all this complexity and blame the issue on environment configuration problems.

Anyhow, the main symptom was that a custom application built on top of the K2 platform, which worked just fine on K2 4.6.6, started to fail immediately after the environment was upgraded to 4.6.11. Specifically, the application started to throw the following exception when calling the ReleaseWorklistItem method:

“2025 Error Marshalling SourceCode.Workflow.Runtime.Management.WorkflowManagementHostServer.ReleaseWorklistItem, 24404 Authentication with server failed for %K2_Service_Account% with Message: AcceptSecurityContext failed: The logon attempt failed”

We were a bit distracted in the beginning by the NLBs and the environment complexity (which, I should admit, was designed and managed remarkably well), but in the end the root cause was isolated to the way the K2 connection string was configured. Let’s assume the app connection string is configured as follows:

<appSettings>
<!-- K2 Workflow management API keys -->
<add key="K2HostWFManagement" value="k2.denallix.com"/>
<add key="K2HostPortWFManagement" value="5555"/>
<add key="SystemUserWFManagement" value="Denallix\k2service"/>
<add key="SystemUserPasswordWFManagement" value="Password"/>
<add key="UseAutheniticateWFManagement" value="True"/>
<add key="UseEncryptedPasswordWFManagement" value="False"/>
<add key="UseIntegratedWFManagement" value="True"/>
<add key="UseIsPrimaryLoginWFManagement" value="True"/>
<add key="SecurityLabelNameWFManagement" value="K2"/>
<add key="WindowsDomainWFManagement" value="denallix"/>
</appSettings>

This connection string does not seem correct: if you look at it carefully, you may notice that we indicate use of integrated authentication for WF management but at the same time provide explicit credentials. And indeed, as soon as we remove the credentials or set UseIntegratedWFManagement to false, the app starts working on 4.6.11. But the thing is that such a connection string works just fine on K2 4.6.6 – 4.6.10 and only fails on 4.6.11. So it looks a bit like a breaking change, which in reality is a fix implemented in 4.6.11 that changed the system behavior.

Prior to 4.6.11, when you authenticated a HostServer session with the following connection string:

Integrated=True;IsPrimaryLogin=True;Authenticate=True;EncryptedPassword=False;Host=k2.denallix.com;Port=5555;UserID=Denallix\Administrator;Password=Password!;WindowsDomain=denallix;SecurityLabelName=K2

the connection string associated with the session was:

Integrated=True;IsPrimaryLogin=True;Authenticate=True;EncryptedPassword=False;Host=dlx;Port=5555;UserID=DENALLIX\Administrator;Password=Password!;AuthData=Denallix;SecurityLabelName=K2

If you pay attention to the end of the sample connection strings above, the WindowsDomain key wasn’t persisted pre-SSO; instead it was added as AuthData.

When you open a connection from the WorkflowManagementServer to the WorkflowClient, there is a check to see if the connection has a WindowsDomain, Username and Password. If it has all three, K2 tries to use those details to authenticate the user. In versions prior to 4.6.11, K2 didn’t persist the WindowsDomain property, so even if you specified all three parameters it would just do a normal integrated connection without the username and password, as WindowsDomain was “missing”.

In 4.6.11 K2 persists the WindowsDomain, so with the connection string properties configured as above, K2 actually tries to authenticate with the following values:

WindowsDomain = "Denallix"
UserID = "Denallix\Administrator"
Password = "Password"

This works in HostServer, because there K2 checks both whether WindowsDomain is specified AND whether a domain is specified in the UserID; there is no such check in WorkflowServer. This leads to a connection attempt combining values from both WindowsDomain and UserID, i.e. something like “Domain\Domain\User”, and the authentication attempt fails because of that.

The workaround is to not specify the WindowsDomain in the connection string if it is already included in the UserID, OR to not specify the domain with the user name. E.g.:

Integrated=True;IsPrimaryLogin=True;Authenticate=True;EncryptedPassword=False;Host=k2.denallix.com;Port=5555;UserID=Denallix\Administrator;Password=Password!;SecurityLabelName=K2

or

Integrated=True;IsPrimaryLogin=True;Authenticate=True;EncryptedPassword=False;Host=k2.denallix.com;Port=5555;UserID=Administrator;Password=Password!;WindowsDomain=denallix;SecurityLabelName=K2

This is something to be aware of if you use connection strings and your app connects to WorkflowServer; otherwise you may get a little surprise after upgrading to 4.6.11 from older versions.
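If your application builds such connection strings in code, using the connection string builder from the K2 client API makes it harder to end up with conflicting properties. A hedged sketch in PowerShell, assuming the default K2 assembly path and the property names of SourceCode.Hosting.Client.BaseAPI.SCConnectionStringBuilder (verify both against your installation):

# Load the K2 host client API (path assumes the default install location).
Add-Type -Path 'C:\Program Files (x86)\K2 blackpearl\Bin\SourceCode.HostClientAPI.dll'

$b = New-Object SourceCode.Hosting.Client.BaseAPI.SCConnectionStringBuilder
$b.Integrated = $true
$b.IsPrimaryLogin = $true
$b.Authenticate = $true
$b.EncryptedPassword = $false
$b.Host = 'k2.denallix.com'
$b.Port = 5555
$b.SecurityLabelName = 'K2'
$b.UserID = 'Denallix\Administrator'   # domain kept in UserID...
$b.Password = 'Password!'
# ...so WindowsDomain is deliberately left unset (see the workaround above).
$b.ConnectionString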


Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server


The other day I had a support case where a temporary outage of the AD DS infrastructure caused K2 Workspace to enter an error state where it started throwing the following error:

“An error has occurred.
Please contact your administrator.
Error:
Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server.
Possible causes
– the ADConnectionString in the K2 Workspace web.config may have an incorrect LDAP path.
– the physical connection to the Active Directory Server might be down.
– please review log files for more information.”

Just for lazy readers and those in a hurry: bumped into the error above? Try recycling the application pool which runs your K2 Workspace (the default application pool name is “K2”).
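A one-liner sketch, assuming IIS 7+ with the WebAdministration module and the default pool name:

# Recycle only the K2 Workspace application pool instead of a full iisreset.
Import-Module WebAdministration
Restart-WebAppPool -Name 'K2'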

The tricky thing here is that it is really easy to miss a short period of AD outage and start “fixing” K2 instead. But if this is an environment which used to work, and you are sure that no changes were made to the K2 configuration recently, then it is just an issue caused by the AD DS outage.

When K2 Workspace is loaded, it attempts to establish a connection with AD as the application pool account. If there is an issue with accessing AD under this account, it leads to the above-mentioned error. What can be wrong with this account? It can be disabled or locked out in AD, but also, after an AD DS outage, it may be necessary to restart the K2 Workspace application pool to force it to reconnect to AD DS. Now, the interesting thing is that a lot of people reach for the big hammer immediately, i.e. iisreset, and in my experience it sometimes does not fix this issue, leaving you wondering why an IIS reset does not help where a simple K2 Workspace application pool restart does.

To remove any confusion, you may want to read up a bit on iisreset vs. recycling – a good explanation can be found here. Your main takeaway from that post should be an understanding of the IIS architecture and its three main components:

IIS Architecture 2

Image source – IIS7 For Non IIS PFEs

The three components are the following:

  1. HTTP.SYS (runs in Kernel Mode). This component is responsible for client connection management, routing requests from browsers, and managing the response cache.
  2. Worker Processes (run in User Mode). If you look at the picture above, you can see that we may also have a so-called Web Garden, which is nothing more than an application pool allowed to use more than one worker process by setting “Maximum number of worker processes” to a value higher than 1. The Web Garden feature was designed for one purpose: “Offering applications that are not CPU-bound but execute long running requests the ability to scale and not use up all threads available in the worker process.” Web Gardens aside, each application pool has one specific worker process (w3wp.exe) within which it runs. The worker process serves static contents, such as HTML/GIF/JPG files, and runs dynamic contents, such as ASP/ASP.NET applications. Therefore the status of the w3wp process (= application pool) is critical for the performance and stability of web applications and web sites.
  3. IIS Admin Services (run in User Mode). Prior to IIS 7 there used to be the IISADMIN service, which hosted the IIS 6.0 configuration compatibility component (metabase). The metabase is required to run IIS 6.0 administrative scripts, SMTP, and FTP. Starting from IIS 7 we have the Windows Process Activation Service (WAS), which manages application pool configuration and worker processes instead of the WWW Service. This enables you to use the same configuration and process model for HTTP and non-HTTP sites.

OK, it seems I went into too much detail here, so back to the main topic: the main thing to know is what actually happens when you execute iisreset. It restarts the IIS services (all of them), and for most of us this is exactly what we expect – which is what may make you wonder why an IIS reset does not fix an issue where a specific application pool restart does. Sounds strange…

I would venture to suggest that iisreset may occasionally fail to restart some specific w3wp processes, but after spending a couple of hours searching the web and doing a couple of quick tests, this does not seem to be the case. What I can say, based on the above-mentioned article, is that you should prefer an application pool recycle anyway.

On a side note, be aware of the following iisreset switches:

iisreset /status

Its output will look as follows:

iisreset-status

It gives you the current status of all IIS services, as well as exactly what will be restarted by iisreset.

iisreset /noforce

This parameter prevents the server from forcefully stopping worker processes. It can make the IIS reset slower, but it is more graceful: a compromise between lowering downtime and being less disruptive to what is already running.

And just to confirm: iisreset executed without any switches is the same as iisreset /restart.

Getting back to the K2 Workspace issue mentioned at the very beginning of this article, my advice is to try recycling your K2 Workspace application pool – it is preferable to, and less disruptive than, iisreset. When you recycle an application pool, IIS creates a new process (keeping the old one) to serve requests, and then tries to move all requests onto the new process. This is known as “overlapped recycling”, as opposed to “process recycling”, and it is the default behavior for all IIS application pools.

In case that did not help you resolve the “Initialization failed before PreInit: Unable to establish a secure connection with the Active Directory server” error in K2 Workspace, below are some K2-side checks to do. Make sure that:

  1. The K2 Workspace site is running in IIS Manager (not stopped).
  2. The application pool designated to run this site and the applications therein is running as well. If it is not running, the service account running the K2 Workspace application pool may be locked out in Active Directory.
  3. The Workspace application pool account has at least read access in AD for the newly added domain (in case you added any) or for the one you have always had. When Workspace is loaded, it attempts to establish the connection with AD as the application pool account.
  4. Try including the domain controller name and LDAP port number in the LDAP connection string as follows:
    <add name="ADConnectionString2" connectionString="LDAP://[DomainControllerName]:[port]/MyDomain.com" />

    OR
    <add name="ADConnectionString2" connectionString="LDAP://[DomainControllerName]/MyDomain.com" />
  5. If you continue to get the same error you may try using the Distinguished name format for the domain instead, for example:
    <add name="ADConnectionString2" connectionString="LDAP://[DomainControllerName]/DC=MyDomain,DC=com" />

If after checking all these things the issue still persists, consider enabling TracingPath in the Workspace web.config to get a more detailed debug output for the PreInit error.


K2 for retail automation


When your work is focused on a specific product and the services around it (no matter whether you are in the development, support, or sales team of a product-centered organization), the most rewarding thing is to see real-world examples of how your product is applied in practice by clients. It is even better when it was implemented in such a way that the client does not mind sharing their implementation story with the wider public in a video format. It is really good to see such examples of how K2 works for business.

Fozzy Group was able to build a K2-based portal automating such things as contract management, specification management, supply schedule management, sales forecasting and scorecards in just one year. I don’t think you can see such BPA go-live dynamics with conventional code-heavy custom development, or with some major (semi-)specialized products which end up being adjusted/customized for years (incurring high consultancy fees in the process) before the business is able to go live with them.

An amazing example from the retail area, which to my mind is one of the industries where automation can bring great and measurable benefits. IMO most retailers still underutilize technology relative to its potential, but I hope we will see some changes as time goes by.


How to: fix User Profile Sync Connection Task Warning


Quite often when doing a K2 installation you may run into the “User Profile Sync Connection Task Warning” during the Configuration Analysis stage:

User Profile Sync Connection Task Warning

The warning itself contains a lot of information as well as links to extra details, but it keeps confusing people in need of a quick fix 🙂 All you need to do is (as usual) detailed in the official K2 documentation, but for some incomprehensible reason it is very difficult for all of us to find the relevant nugget in the documentation, let alone read it all beforehand 🙂 But it is there, really:

User Profile Sync Connection Task Documentation

OK, enough referring to documentation and preaching about the virtues of reading it. Here is what you need to do to fix this warning: the install user has to have permissions on the UPS. If you are installing as Administrator, for example, then check whether Administrator has permissions on the Administrators and Permissions tabs for the UPS service. Below you may find a bit of visual aid which will hopefully keep you from getting lost in the thicket of the SharePoint UI.

User Profile Sync Connection Task Warning Fix Grant Permissions
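The same permissions can also be granted from the SharePoint Management Shell; a hedged sketch, assuming the install account shown is a placeholder and your service application's type name matches “User Profile*”:

# Grant the install account Full Control on the UPS admin and permissions ACLs.
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like 'User Profile*' }
$account = New-SPClaimsPrincipal -Identity 'DOMAIN\installuser' -IdentityType WindowsSamAccountName

$admin = Get-SPServiceApplicationSecurity $upa -Admin
Grant-SPObjectSecurity $admin $account 'Full Control'
Set-SPServiceApplicationSecurity $upa -Admin $admin

$perm = Get-SPServiceApplicationSecurity $upa
Grant-SPObjectSecurity $perm $account 'Full Control'
Set-SPServiceApplicationSecurity $upa $perm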

Sorry for the absence of Halloween-themed pics in the post on such a day, but I hope the information here will be useful to someone (given the number of times I have heard questions about this warning, it definitely should be).
