TCSMUG - Twin Cities Systems Management User Group - Sherry Kissinger

Example of Custom SQL Job to log to Application Event Viewer for ConfigMgr


This issue: https://mnscug.org/blogs/sherry-kissinger/477-configmgr-current-branch-topic-id-611-swd-state-messages-flood --where a console admin accidentally checks the box for "Use Server Groups" on a collection, all members of that collection then start sending state messages every 1 minute, fail to patch, and potentially affect State Message processing--occurred again in our environment.  We had gotten lax about interactively checking whether any collection accidentally had that setting.  Since we have System Center Operations Manager for monitoring, and SCOM can trigger on EventIDs, we made a custom SQL job to run multiple times a day: if any collection (not already exempted/expected to have that setting) gets that setting, the job drops an Error into the Application Event Log on the server.  With a custom SCOM rule, we'll now get alerted if this happens again.

In case someone else might find this useful, here is what we did. 

1) TEST first.  Create a collection with 1 member in it, and set that checkbox (so that you can test).  In SQL Server Management Studio (SSMS), run the following as a new query, modifying both of the "c3.name not in" entries for any known exceptions your environment already has.  Use Server Groups might be a valid and expected setting for servers in a server cluster, for example.  This is only meant to help when that setting is accidentally checked.

-- If any collection with "Use Server Groups" enabled is not in the known-exceptions list,
-- raise a severity-16 error; RAISERROR ... WITH LOG writes it to the Application Event Log.
if (select case when count(c1.UseCluster) > 0 then 0 else 1 end as [result]
from CEP_CollectionExtendedProperties c1
join collections_g c2 on c2.collectionid=c1.collectionid
join v_collection c3 on c3.collectionid=c2.siteid
where c1.UseCluster=1
and c3.name not in ('Known Collection Name that Should Have that option checked','Another Collection that should be OK'))=0
BEGIN
 DECLARE @CollectionIDAndName nvarchar(200) = (
  select top 1 c2.siteid + ', ' + c3.name as [ValueToLog] from CEP_CollectionExtendedProperties c1
   join collections_g c2 on c2.collectionid=c1.collectionid
   join v_collection c3 on c3.collectionid=c2.siteid
   where c1.UseCluster=1
   and c3.name not in ('Known Collection Name that Should Have that option checked','Another Collection that should be OK')
)
 DECLARE @VALUE nvarchar(MAX) = ('At least 1 collection has Use Server Group Enabled: ' + @CollectionIDAndName + ' To resolve, either edit the SQL Job on the server to include this collection as a known exception, or edit that collection and uncheck Use Server Group. A possible consequence of that setting is machines in that collection may be unable to patch, and may also cause a state message backlog due to submitting state messages every minute')
 RAISERROR (@Value, 16, 1) with LOG
END

When you run that against your CM_xxx database in SSMS, because you created a test collection with that option checked, you should get an Error message in the Application Event Log on the server holding the CM_xxx database.  To test again, edit the properties of the collection and uncheck the box.  Then re-run the query to confirm it does NOT create an Application Event Log error message.  Once you have confirmed that an Event Log Error entry is created and works as you expect, you can continue.

2) In SQL Server Management Studio, create a new job and give it a name--one you or a coworker will still understand when looking at SQL jobs a year from now.  The Owner is your known standard owner, if you have a standard.  If you do not have a specific standard, try sa or NT Authority\System.  (You just don't want to use your own individual personal ID--that is poor practice.)  You may also want to enter as much of a description as possible.  For Steps, there will be only 1 step.
Type:  Transact-SQL script (T-SQL)
Run as:  <blank>
Database:  Your CM_xxx database
In the Command: area, paste the entire SQL script you tested successfully.

Under Advanced, use whatever Actions are your standards.  If you don't have a specific standard, we use "On success, quit the job reporting success" and, on failure, "Quit the job reporting failure".  0 retries, no output file, no log to table; Run as user is blank.

Save the job (with no schedules for now).

3) Test your job by editing the properties of a test collection and enabling "Use Server Groups".  Then in SSMS, under SQL Server Agent, Jobs, right-click your job, choose "Start Job at Step...", and confirm you do get a new Event Log entry.  Remove the setting, and test again--ensuring you do NOT get a new Event Log entry.

4) Once you are satisfied it works as expected, using your favorite monitoring tool, whatever that might be (for us it was SCOM), set it up to monitor for Application Event Log, SOURCE: MSSQLSERVER  EventID: 17063  (that's what we got... confirm that is what you get in Application Event Log--I assume that is universal... but I've been wrong before, many many times)
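While testing, rather than digging through Event Viewer each time, a quick hedged PowerShell check might help (this assumes a default SQL instance; a named instance logs under the provider MSSQL$&lt;InstanceName&gt; rather than MSSQLSERVER):

# Show the most recent 17063 events from the Application log
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'MSSQLSERVER'; Id = 17063 } -MaxEvents 5 |
    Select-Object TimeCreated, Id, Message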

5) Confirm your monitoring tool correctly reports on any new events: again enable Use Server Groups on a test collection and run the job, then remove the setting and run the job again.

6) Once confirmed; edit the job and add a Schedule or Schedules.  How frequently you would like this to run, and potentially alert on the issue, is up to your own standards and discretion.
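If you'd rather script the schedule than click through the job properties, here's a hedged T-SQL sketch using msdb's documented sp_add_jobschedule procedure (the job name below is hypothetical--use whatever you named the job in step 2; this example runs every 6 hours):

USE msdb;
EXEC dbo.sp_add_jobschedule
    @job_name = N'Check for Use Server Groups',  -- hypothetical; your job name from step 2
    @name = N'Every 6 hours',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every 1 day
    @freq_subday_type = 8,        -- repeat on an hours interval
    @freq_subday_interval = 6,    -- every 6 hours
    @active_start_time = 000000;  -- starting at midnight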

Done!  With this in place, we hope we will get monitoring alerts about this situation... instead of alerts about a state message backlog.


WSUS Administration, WSUSPool, web.config, settings enforcement via Configuration Items


Just in case you got here via other means, I recommend checking out this information as well, with links to more maintenance routines and information--> https://deploymentresearch.com/Research/Post/665/Fixing-WSUS-When-the-Best-Defense-is-a-Good-Offense

A recent new storm (see here, Jeff Carreon's blog) of clients' software update scanning issues had our team re-evaluating all of our existing WSUSPool, web.config, and WSUS Administration settings on our Software Update Points. As some of you CM admins may recall, there have been multiple times over the last few years where, due to various and assorted things outside of our control (like metadata size for Windows updates), we as CM admins have had to manually adjust settings in the WSUS configuration. At my company, we had a few Configuration Items enforcing and monitoring those settings, but not all of them.

The below settings may or may not be the settings YOU need in your environment. You are free to take them and modify them for whatever makes the most sense for your particular configuration. For us, these are the settings we've determined we want to monitor and enforce... at least at this time. The settings required for a healthy SU scanning environment seem to change every 18 months--at least here--so don't presume these settings are the end-all be-all of golden configuration. Everyone is different, and things change. But hopefully, if you do need to monitor or set these in your environment, these examples can assist you in getting to your Golden Configuration.

Keep in mind this blog is written from the standpoint of a System Center Configuration Manager person, who has created these rules using what are called "Configuration Items". The way I structured it at my company was one rule per Configuration Item. However, one could just as easily create one Configuration Item and have multiple rules inside it. It's more personal preference than anything else in this case. In my case, since I was building and testing the rules one by one, I wanted a clear separation in case I messed something up while testing in the lab.

“The Easy Button” – I exported the rules from my lab, and they are available --> here <--. You can import them into your Configuration Manager console, review or adjust them there, and test them.  The exported rules are 14 CIs. To deploy them, create a Baseline and add all 14 CIs to it.  Prior to saving the Baseline, modify each rule to be "optional" instead of "required".  Once saved, deploy the Baseline to a test machine in "monitor" mode only--just to see what differences you might have.  You may want to modify multiple rules, or discard some as not necessary for you to monitor/remediate.  Once you are comfortable, deploy with remediation and test again.

Occasionally, I’ve had requests to list out exactly what is inside each rule, usually because for whatever reason the import isn’t working as expected so the recipients can’t see what’s inside. So here is what is inside each Configuration Item… this will be long, but hopefully useful!

First of all, for each Configuration Item, I elected to make it an “Application” type. That's because even though I’ve targeted the baseline just to the servers which I ‘know’ have the WSUS feature installed, if someone were to accidentally deploy a baseline containing these rules to other devices, I want the CI to ‘not bother’ running the test or remediation inside. It would be pointless and potentially confusing to people looking at reports. The “Detection” script for the Application is a PowerShell script type, this one line:

(Get-WmiObject -Namespace root\cimv2 -class win32_serverfeature -Filter "Name = 'WINDOWS SERVER UPDATE SERVICES'").Name

If anything is returned (the name), then the CI will assume the application is installed, and it will continue. If nothing is returned, it won’t continue.

Second, just about every single test requires the PowerShell module WebAdministration, via import-Module webadministration

Hopefully, your WSUS / Software Update Point servers were installed by default with Add-WindowsFeature Web-Server and Web-Scripting-Tools; but if you try to run these scripts interactively on a Software Update Point server and get ‘failed to import webadministration module’, you’ll want to confirm that the Web-Scripting-Tools component is installed—otherwise none of this will work. A quick check is shown below.
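A hedged sketch for that check, using the ServerManager cmdlets available on Windows Server:

# Is the IIS scripting tools feature (which provides the WebAdministration module) present?
Get-WindowsFeature -Name Web-Scripting-Tools

# If the Install State is not 'Installed', add it:
Install-WindowsFeature -Name Web-Scripting-Tools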

Below are 14 Configuration Items we’re currently testing in our lab environment (will be moved to production once testing is complete). If you can’t import the .cab file attachment, worst case you can create your own Configuration Items using the below information.

 

CI 1

 

Title:

WSUS Administration Max Connections Should be Unlimited

Detection:

import-Module webadministration ; (get-itemproperty IIS:\Sites\'WSUS Administration' -name limits.maxConnections.Value)

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\Sites\'WSUS Administration' -Name limits.maxConnections -Value 4294967295

Compliance Rule:

4294967295

CI 2

 

Title:

WSUS Administration MaxBandwidth should be unlimited

Detection:

import-Module webadministration ; (get-itemproperty IIS:\Sites\'WSUS Administration' -name limits.maxbandwidth.Value)

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\Sites\'WSUS Administration' -Name limits.maxBandwidth -Value 4294967295

Compliance Rule:

4294967295

CI 3

 

Title:

WSUS Administration TimeOut should be 320

Detection:

import-Module webadministration

(get-itemproperty IIS:\Sites\'WSUS Administration' -Name limits.connectionTimeout.value).TotalSeconds

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\Sites\'WSUS Administration' -Name limits.connectionTimeout -Value 00:05:20

Compliance Rule:

320

CI 4

 

Title:

WSUS ClientWebService web.config executionTimeout should be 7200

Detection:

import-Module webadministration

$FullFileName = (Get-WebConfigFile 'IIS:\Sites\WSUS Administration\ClientWebService').fullname

[XML]$xml = Get-Content $FullFileName

((($xml.configuration).'system.web').httpRunTime).executionTimeout

Remediation Script:

import-Module webadministration

$FullFileName = (Get-WebConfigFile 'IIS:\Sites\WSUS Administration\ClientWebService').fullname

$acl = get-acl $FullFileName

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule("NT Authority\SYSTEM","FullControl","Allow")

$acl.SetAccessRule($Ar)

Set-ACL $FullFileName $acl

[XML]$xml = Get-Content $FullFileName

$ChangeThis = ((($xml.configuration).'system.web').httpRunTime)

$ChangeThis.SetAttribute('executionTimeout', '7200')

$xml.Save($FullFileName)

Compliance Rule:

7200

 

NOTE: when setting the Compliance Rule, also check the box for “Report NonCompliance if this setting instance is not found”, because in some default installs of WSUS, the web.config file doesn’t have this particular value at all. Checking that box ensures the value is added if it is missing completely.

CI 5

 

Title:

WSUS ClientWebService web.config maxRequestLength should be 20480

Detection:

import-Module webadministration

$FullFileName = (Get-WebConfigFile 'IIS:\Sites\WSUS Administration\ClientWebService').fullname

[XML]$xml = Get-Content $FullFileName

((($xml.configuration).'system.web').httpRunTime).maxRequestLength

 

Remediation Script:

import-Module webadministration

$FullFileName = (Get-WebConfigFile 'IIS:\Sites\WSUS Administration\ClientWebService').fullname

$acl = get-acl $FullFileName

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule("NT Authority\SYSTEM","FullControl","Allow")

$acl.SetAccessRule($Ar)

Set-ACL $FullFileName $acl

[XML]$xml = Get-Content $FullFileName

$ChangeThis = ((($xml.configuration).'system.web').httpRunTime)

$ChangeThis.maxRequestLength = "20480"

$xml.Save($FullFileName)

Compliance Rule:

20480

CI 6

 

Title:

WSUS Service Should Be Running

Detection:

(Get-Service -Name WsusService).Status  # the service name is WsusService; "WSUS Service" is its display name

Remediation Script:

Start-Service -Name WsusService

Compliance Rule:

Running

CI 7

 

Title:

WSUSPool Application Pool Should be Started

Detection:

import-Module webadministration ; (Get-WebAppPoolState WSUSPool).Value

Remediation Script:

import-Module webadministration ; Start-WebAppPool -Name "WSUSPool"

Compliance Rule:

Started

CI 8

 

Title:

WSUSPool CPU ResetInterval should be 15 min

Detection:

import-Module webadministration ; (get-itemproperty IIS:\AppPools\Wsuspool -Name cpu.resetInterval.value).minutes

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\AppPools\Wsuspool -Name cpu -Value @{resetInterval="00:15:00"}

Compliance Rule:

15

CI 9

 

Title:

WSUSPool Ping Disabled

Detection:

import-Module webadministration ; (get-itemproperty IIS:\AppPools\Wsuspool -Name processmodel.pingingEnabled).value

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\AppPools\Wsuspool -Name processmodel.pingingEnabled False

Compliance Rule:

False

CI 10

 

Title:

WSUSPool Private Memory Limit should be 0

Detection:

import-module webadministration

$applicationPoolsPath = "/system.applicationHost/applicationPools"

$appPoolPath = "$applicationPoolsPath/add[@name='WsusPool']"

(Get-WebConfiguration "$appPoolPath/recycling/periodicRestart/@privateMemory").Value

 

Remediation Script:

import-module webadministration

$applicationPoolsPath = "/system.applicationHost/applicationPools"

$appPoolPath = "$applicationPoolsPath/add[@name='WsusPool']"

Set-WebConfiguration "$appPoolPath/recycling/periodicRestart/@privateMemory" -Value 0

 

Compliance Rule:

0

CI 11

 

Title:

WSUSPool queueLength should be 30000

Detection:

import-Module webadministration ; (get-itemproperty IIS:\AppPools\Wsuspool | Select queuelength).queueLength

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\AppPools\Wsuspool -name queueLength 30000

Compliance Rule:

30000

CI 12

 

Title:

WSUSPool RapidFail Should be Disabled

Detection:

import-Module webadministration ; (get-itemproperty IIS:\AppPools\Wsuspool -name failure.rapidFailProtection).Value

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\AppPools\Wsuspool -name failure.rapidFailProtection False

Compliance Rule:

False

CI 13

 

Title:

WSUSPool Recycling Regular Time interval should be 0

Detection:

import-Module webadministration ; ((get-itemproperty IIS:\AppPools\Wsuspool -name recycling.periodicRestart.time).Value).TotalMinutes

Remediation Script:

import-Module webadministration ; set-Itemproperty IIS:\AppPools\Wsuspool recycling.periodicRestart.time -Value 00:00:00

Compliance Rule:

0

CI 14

 

Title:

WSUSPool requests should be 0

Detection:

import-module webadministration

$applicationPoolsPath = "/system.applicationHost/applicationPools"

$appPoolPath = "$applicationPoolsPath/add[@name='WsusPool']"

(Get-WebConfiguration "$appPoolPath/recycling/periodicRestart/@requests").Value

 

Remediation Script:

import-module webadministration

$applicationPoolsPath = "/system.applicationHost/applicationPools"

$appPoolPath = "$applicationPoolsPath/add[@name='WsusPool']"

Set-WebConfiguration "$appPoolPath/recycling/periodicRestart/@requests" -Value 0

 

Compliance Rule:

0

 

 

ConfigMgr Windows Update Agent Scan Results SQL queries


During the storms of WSUS / WUA scanning issues in ConfigMgr over the last few weeks, in addition to network traffic results, another way we could quickly tell if our Software Update Points were getting overwhelmed was by monitoring scan failures.  Attached --> Here <-- are a couple of SQL queries that should work in any environment.  Although the attached queries do not cover every possible scan result error code, they include some error codes I've gleaned over the years from various sources.

If you have multiple Software Update Point servers, one of the reports inside can help you see if perhaps 1 particular server is being swamped while others are OK.  You can also adjust that report slightly--what is attached defaults to "client scan results in the last 24 hours".  When we were actively working the various issues--during several attempts where we thought 'A-ha! this might be the fix!'--we could change it to "scan results in the last 1 hour" and see if successful scans were rising and failures decreasing.
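If you can't get to the attachment, a minimal hedged sketch in the same spirit (this assumes the standard v_UpdateScanStatus and v_StateNames views, where TopicType 501 holds the scan-state names):

-- Clients per SUP and scan state, last 24 hours
select uss.LastScanPackageLocation as [SUP],
 sn.StateName as [Scan State],
 count(*) as [Clients]
from v_UpdateScanStatus uss
join v_StateNames sn on sn.TopicType = 501 and sn.StateID = uss.LastScanState
where uss.LastScanTime > dateadd(hour, -24, getdate())
group by uss.LastScanPackageLocation, sn.StateName
order by [Clients] desc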

Hopefully these SQL queries can be a starting point for others caught in the same struggle of maintaining their WSUS / Software Update Points, with all the challenges that seem to have been occurring in the last several months.

ConfigMgr RefreshServerComplianceState as a Configuration Item


State messages are great, because they are quickly processed.  However, it can (and does) occasionally happen that for network reasons, corrupt data, or other influences, some state messages from your ConfigMgr clients never make it from the client into your database.  Normally that isn't a big deal--however, sometimes those state messages are for Software Updates.  If you have people who look at reports for Software Updates, a client may locally say it is Compliant for Software Update KB123456, while the reports based on your database say KB123456 on that client is Non-Compliant.  Read this: https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth/ for a much better explanation of why and how; but the short conclusion is that you want to ask your clients to occasionally run what is referred to as a "RefreshServerComplianceState" locally.  Basically, you are asking your clients to resend the compliant/non-compliant state messages for all Software Updates they are aware of locally to ConfigMgr, your database.  aka... exactly what it says on the tin: Refresh Server Compliance State.

The short and sweet is that it's really just a line or two of vbscript or powershell code.  But if you are in a large environment, you often don't want to tell every single client to all send up state messages all on the same day.  It could POTENTIALLY be a lot of data, and backlog your servers' SQL processing.  It would eventually catch up... but why create a headache for yourself?

Below is a PowerShell script that you COULD choose to run... as a set-it-and-forget-it type of thing.  As-is, if you took the below and deployed it as a PowerShell script in a Configuration Item, and the Baseline were to run daily, each client would randomly decide to RefreshServerComplianceState roughly twice a year.  If you want it more frequent, change the 180 to, say, 90 (about every 3 months) or 60 (about every 2 months). 

The below is just a suggestion, and you can take it and break it as you like.

<#
.SYNOPSIS
This routine generates a random number between 1 and "MaximumRandom". In general, a MaximumRandom
number will likely be 180; if the Configuration Item is run daily, approximately twice a year it is expected
that a client will randomly pick a value of 1, and trigger a RefreshServerComplianceState.

.DESCRIPTION
- This script would likely be used by a Configuration Manager Administrator as a 'Configuration Item', as the
"Detection" script in that Configuration Item. The Administrator would set it up as a detect-only script, where
"what means compliant" is that any value at all is returned.
- The Configuration Manager Administrator would likely add this to a baseline, and deploy that baseline to run
on a daily basis to their Windows-based devices which scan for or deploy patches using the Software Updates feature.
- Using the MaximumRandom number of 180, presuming the baseline runs daily, approximately twice a year based on
random probabilities, a client will trigger to run the "RefreshServerComplianceState". See the blog mentioned
below for why this is something a Configuration Manager Administrator might want to do.
- If the Configuration Manager Administrator wants to make it randomly occur more frequently or less frequently,
they would either adjust the $MaximumRandom number lower or higher, or modify the frequency of the Baseline
evaluation schedule.
- For interactive testing, modify $VerbosePreference to 'Continue' to see what action was taken. Remember to change
it back to 'SilentlyContinue' for live deployments.
- If a client does trigger, an EventLog entry in the Application log, with an Information EventId of 555 from source
SyncStateScript, will be created. You can add to or modify the -Message entry for the EventLog to be as verbose as
you need it to be for your own potential future tracking purposes. Perhaps you might want to add in specifics like
"Configuration Item Named <whatever> in the Baseline <whatever> triggered this action; this was originally deployed
on <Date>".

Credits: Garth Jones for the idea.
https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth
for the reasons why it's a good idea to do so occasionally.

.NOTES
2018-05-06 Sherry Kissinger

$VerbosePreference options are
'Continue' (show the messages)
'SilentlyContinue' (do not show the messages; this is the default if not set at all)
'Stop' (show the message and halt; use for debugging)
'Inquire' (prompt the user whether it is OK to continue)
#>

Param (
  $VerbosePreference = 'SilentlyContinue',
  $ErrorActionPreference = 'SilentlyContinue',
  $MaximumRandom = 180,
  $ValueExpected = 1
  #ValueExpected will likely always be 1, and never change; set as a parameter for ease of reporting.
)

$RandomValue = Get-Random -Maximum $MaximumRandom -Minimum 1
if ($RandomValue -eq $ValueExpected) {
  Write-Verbose "Random generated value of $RandomValue equals $ValueExpected, therefore RefreshServerComplianceState for ConfigMgr Client State Messages for Updates will be triggered."
  $SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore
  $SCCMUpdatesStore.RefreshServerComplianceState()
  New-EventLog -LogName Application -Source SyncStateScript -ErrorAction SilentlyContinue
  Write-EventLog -LogName Application -Source SyncStateScript -EventId 555 -EntryType Information -Message "Configuration Manager RefreshServerComplianceState Triggered to Run. If questions on what this is for, refer to https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth/ "
}
else {
  Write-Verbose "Random generated value was $RandomValue, which does not equal $ValueExpected; RefreshServerComplianceState for ConfigMgr Client State Messages for Updates was not triggered."
}

Write-Host 'Compliant'

Reporting on PST Files for Outlook using SCCM


This is an update to this: https://www.mnscug.org/blogs/sherry-kissinger/249-pstfinder .  The reason for the update is that the old method (from 2013) worked for older versions of Outlook, but not for Outlook 2013 or newer.

With some clever scripting from different people (notably https://social.technet.microsoft.com/Forums/en-US/7ff6821c-b7cc-46e0-bc5a-342cfd9c0bf9/display-outlook-pst-file-location-on-remote-machines?forum=winserverpowershell , John Marcum, and Sherry Kissinger), we've got a routine that will, for the most part, answer those three questions: who has PST files, where they are, and how large they are.  The basics of how it works: there are two scripts that run.  One runs as SYSTEM, and its only purpose is to create a custom namespace in WMI and grant permissions to all of your domain users on that custom namespace--so they can populate it with the results of script #2.  Script #2 runs only when a user is logged in, with user rights.  That's because the majority of what the script needs to do is read information about that specific logged-in user's Outlook configuration, and (potentially) any mapped-drive information which may be referenced by the PST file location.

The results of the 2nd script end up in that custom WMI namespace, and will have the following information:

DateScriptRan = the exact date and time that the script ran to gather this user-specific information.
FileSizeinMB = If it could be detected, and the file size was 1mb or larger, the size of the PST.  If it's less than 1mb, or for whatever reason could not be detected, the value will be 0.
PSTFile = The DisplayName in Outlook
PSTLocation = The location as known to Outlook
Type = If it could figure out that Q: was a mapped network drive, it'll say 'Remote'; otherwise it'll say 'Local'
UserDomain = whomever is logged in, what their domain is.
UserName = whomever is logged in, what their username is.
Location = This will either be a drive letter, or if it was possible to determine if that drive letter was really a mapped drive to a network location, the \\Server\Share will be populated.

End result:  After deploying these two scripts, you will be able to answer those pesky questions from your Exchange team about who has referenced PST files, where they are, and how large they are.  Of course, the main limitation is that this is per-user information.  If you have a lot of shared machines, or the same user has multiple computers (and connects to the same PST files on those multiple computers), you'll have to do some creative reporting to ensure you don't double-count the same PST files.

Ok, enough of how it works.  You really want to know *exactly* what to do, right?  Let's start!
 
Your Source folder for the package will contain 2 things:
CreateCustomCMClasses-RunAsSystem.ps1
PopulateWMI-RunAsUser.ps1

The .ps1 files are at this  -->link<--. 

You will need to make 1 change to "CreateCustomCMClasses-RunAsSystem.ps1", this line:
    [String]$Account   = 'YourDomainHere\Domain Users',
Modify that to be your domain (the domain your users are in that will be logging in and running script #2).

Create two Programs.  The first runs Powershell.exe -ExecutionPolicy Bypass -File CreateCustomCMClasses-RunAsSystem.ps1, whether or not a user is logged in, with administrator rights.  The second runs Powershell.exe -ExecutionPolicy Bypass -File PopulateWMI-RunAsUser.ps1, only when a user is logged in, with user rights.  For the second one, set "run another program first", and have it run the first one.  The first program only needs to run once per computer; it doesn't need to re-run.

Advertise the 2nd program to a collection (I recommend a test/pilot first), and confirm that it works as you expect.  If you want to confirm the data is there, look in root\CustomCMClasses  (not root\cimv2) for cm_PSTFileInfo, that there are instances there for any Outlook-attached PST files for that user.
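To spot-check interactively on that test box, a hedged one-liner (namespace and class names per the scripts above):

Get-WmiObject -Namespace root\CustomCMClasses -Class cm_PSTFileInfo | Select-Object UserName, PSTFile, PSTLocation, FileSizeinMB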

If you are satisfied it's there locally, import the below into Default Client Agent Settings, Hardware Inventory

[SMS_Report(TRUE),
 SMS_Group_Name("PSTFileInfo"),
 SMS_Class_ID("PSTFileInfo"),
 SMS_Namespace(FALSE),
 Namespace("\\\\\\\\localhost\\\\root\\\\CustomCMClasses")]

class cm_pstfileinfo : SMS_Class_Template
{
  [SMS_Report(TRUE)] string DateScriptRan;
  [SMS_Report(TRUE)] uint32 FileSizeinMB;
  [SMS_Report(TRUE)] string Location;
  [SMS_Report(TRUE)] string PSTFile;
  [SMS_Report(TRUE),key] string PSTLocation;
  [SMS_Report(TRUE)] string Type;
  [SMS_Report(TRUE)] string UserDomain;
  [SMS_Report(TRUE)] string UserName;
};


Sit back, relax for a bit... then invoke a hardware inventory on your test boxes, and see if the data shows up in your database in v_gs_pstfileinfo0.  If so, deploy the advert to your real target collection of users or computers, and wait for the data to show up.  Depending upon your need for this information, you may or may not want to have the advert run on a recurring basis (weekly? monthly?), or just gather it for a week or so (just enough to answer the question), then delete the advert and change the inventory from TRUE to FALSE (until the next time they ask).

Here's a potential sql report to get you started:

select sys.Name0 as [Computer Name],
pst.UserName0 as [User],
pst.PSTFile0 as [File Name],
pst.PSTLocation0 as [File Location],
pst.Type0 as [Local/Remote],
pst.Location0 as [Location],
pst.FileSizeinMB0 as [Size in MB],
pst.DateScriptRan0 as [Date Collected]
from v_R_System sys
Inner Join v_GS_PSTFileInfo0 pst on sys.ResourceID = pst.ResourceID
order by sys.Name0
 

Configuration Manager Collection Cleanup Suggestions


Certainly in your CurrentBranch Console, under "Management Insights", there are some things there regarding collection cleanup; but here's a few other ways to look at your data.

Over the years, Collection plaque and tartar just grows and grows... and over time, people forget what collections were made for, or why they have them.  As a way to help the people who use our console narrow it down a bit to 'possible' stale, old collections which no longer have any purpose, below is a potential starting point.

What the below would list is collectionids and names, which are:
- NOT Currently used for any other collection as a "limited to", "Include", or "Exclude"
- NOT Currently used for any Deployment, whether it's a baseline, an application, an advertisement, or a task sequence
- NOT Currently used to define a Service Window (aka Maintenance Window)
- NOT Currently used for any custom client agent settings you might have configured.
- NOT currently used for any collection variables you might have for OSD
- NOT currently used for Automatic Client Upgrade, as an excluded collection
- NOT a default/out of the box collection (aka, ones that start with SMS)

This isn't, of course, a definitive list.  For example, perhaps a collection was created to deploy "Really Important Application" 2 weeks ago... but the actual deployment hasn't happened yet--it's destined to begin next week.  In that case the collection might show up on this list--but it shouldn't be deleted--it has a future use.  But hopefully, if your environment has a lot of collections and you're trying to determine which ones might be safe to remove, this is a potential starting point.

Select c.collectionid, c.name [CollectionName]
from v_collection c
where
    c.collectionid not in (Select SourceCollectionID from vSMS_CollectionDependencies) -- include, excludes, or limited to
and c.collectionid not in (Select collectionid from v_deploymentsummary) -- any deployment, apps, advert, baseline, ts
and c.collectionid not in (Select Collectionid from v_ServiceWindow)
and c.collectionid not in (select collectionid from vClientSettingsAssignments)
and c.collectionid not in (select siteid from vSMS_CollectionVariable) -- OSD Collection Variables
and c.collectionid not in (Select a.ExcludedCollectionID from autoClientUpgradeConfigs a) -- ACU exclusion collection
and c.collectionid not in (select collectionid from v_collection where collectionid like 'sms%') -- exclude default collections

Another potential SQL query for finding "collections not needed" could be this one.  It sorts by "last time members changed in this collection".  The potential argument goes like this: even *if* that collection is being used for an active deployment, if the members of that machine-based (not user-based) collection haven't changed in years, how important is it to keep that particular deployment going / available?

;with cte as (select t2.CollectionName, t2.SiteID [CollectionID]
 ,(Cast(t1.EvaluationLength as float)/1000) as [EvalTime (seconds)]
 ,t1.LastRefreshTime, t1.MemberChanges, t1.LastMemberChangeTime,sc.SiteCode,
 case
  when c.refreshtype = 1 then 'Manual'
  when c.refreshtype = 2 then 'Scheduled'
  when c.refreshtype = 4 then 'Incremental'
  when c.refreshtype = 6 then 'Scheduled and Incremental'
 end as [TypeofRefresh]
,c.MemberCount,c.CollectionType
from dbo.collections_L t1 with (nolock)
join collections_g as t2 with (nolock) on t2.collectionid=t1.collectionid
join v_sc_SiteDefinition sc on sc.SiteNumber=t1.SiteNumber
join v_collection c on c.collectionid=t2.siteID
)
Select cte.collectionID, cte.CollectionName, CTE.[EvalTime (seconds)]
,Right(Convert(CHAR(8),DateADD(SECOND,CTE.[EvalTime (seconds)],0),108),5) [EvalTime (Minutes:Seconds)]
,cte.lastrefreshtime, cte.memberchanges, cte.lastmemberchangetime, cte.typeofrefresh, cte.membercount
from cte
where cte.collectiontype=2
and cte.collectionid not like 'SMS%'
order by lastmemberchangetime

Configuration Manager Current Branch FastChannel Information via SQL Query


A lot of people use the console--but I don't go in there that much; I'm more of a query-SQL kind of person.  Some of the updates lately for Current Branch have been leveraging the "FastChannel" for communications.  If you don't remember, originally the FastChannel was meant for quick-hit communications, primarily around Endpoint Protection.  However, over the last several updates, the product team has been adding more communications over the fast channel.  Most of those communications are to make the console experience feel more "real time"--and I get that, for people who live in the console.  But I don't... so where is that information, and how can I use it... using SQL?

Here's a couple things to have in your SQL query backpocket.

If you are Current Branch 1710 or higher, the 1710 clients will communicate back about if they have 1 or more of 4 specific "reboot pending" reasons.  You can see that in console--but as a SQL report, here's a summary query to show you counts of devices and what reboot pending state (and why) they are in:

select cdr.ClientState [Pending Reboot],
Case when (1 & cdr.ClientState) = 1 then 1 else 0 end as [Reason: ConfigMgr],
Case when (2 & cdr.ClientState) = 2 then 1 else 0 end as [Reason: Pending File Rename],
Case when (4 & cdr.ClientState) = 4 then 1 else 0 end as [Reason: Windows Update],
Case when (8 & cdr.ClientState) = 8 then 1 else 0 end as [Reason: Windows Feature],
Count(*) [Count]
from vSMS_CombinedDeviceResources cdr
where CAST(right(left(cdr.ClientVersion,9),4) as INT) >= 8577 and cdr.clientversion > '1'
Group by cdr.ClientState
order by cdr.clientstate

It'll only tell you about clients which are version 8577 or higher (aka, 1710).  If you are absolutely certain all your clients are 1710 or higher, you can remove that section of the "where" clause.
Asking for ClientVersion > '1' is because you might have mobile clients reporting to your CM, and you really only want to know about Windows-based clients.  Essentially, those where clauses are there so that you can be a little more accurate about pending reboots.  If you have a lot of clients below version 1710, they can't communicate their ClientState via the FastChannel, so you might think "great, these devices don't have a pending reboot"--when what it really means is "these clients aren't able to tell me whether they need a reboot, because their client version is not capable of telling me that, via this method".

Another piece of information that can come in via the Fast Channel: if you are using Current Branch 1806 or higher, 1806 clients can tell you about the CURRENTLY logged-in user.  This differs from what we as SMS/ConfigMgr admins are used to.  For years we have been able to tell "last logged on user" or "most likely primary user"--based on heartbeat, hardware inventory, or Asset Intelligence data.  But that could be "old news"--depending upon how frequently your heartbeat or inventory runs, it could be hours to days old.  The current logged-on user should be at worst a few minutes old (depending, of course, upon your size and complexity).

select s1.netbios_name0 [ComputerName], cdr.CurrentLogonUser [Current Logged on User According to FastChannel]
from vSMS_CombinedDeviceResources cdr
join v_r_system s1 on s1.resourceid=cdr.machineid
order by s1.netbios_name0

Visual Studio 2017 Editions using ConfigMgr Configuration item


This is a companion to https://mnscug.org/blogs/sherry-kissinger/416-visual-studio-editions-via-configmgr-mof-edit It *might* be a replacement for the previous mof edit; but I haven't tested this enough to make that conclusion--test yourself to see.

Issue to be resolved:  there are licensing groups at my company who are tasked with ensuring licensing compliance.  There is a significant cost difference between Visual Studio Standard, Professional, and Enterprise.  Prior to Visual Studio 2017, that information could be obtained via registry keys, and a configuration.mof + import (see link above) was sufficient to obtain it.

According to https://blogs.msdn.microsoft.com/dmx/2017/06/13/how-to-get-visual-studio-2017-version-number-and-edition/ (looks like published date is June, 2017), that information is no longer in the registry.  There is a uservoice published --> https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/19026784-please-add-a-documentation-about-how-to-detect-in <--, requesting that the devs for visual studio put that back--but there's no acknowledgement that it would ever happen.

So that means we lonely SCCM administrators, tasked with "somehow" getting the edition information to the licensing teams at our companies, have to--yet again--find a way to "make it happen" using the tools provided.  So here's "one possible way". 

This has only been tested on ONE device in a lab... so it's probably not perfect.  Supposedly, using the -legacy switch it'll also detect "old versions" installed--but I have no idea if that works or not.  Might not.

Here's how I plan on deploying this...

1)  configuration Item, Application Type.
    a) 'Detection Method', use a powershell script... this may not be universal, but currently in my lab, the location of 'vswhere.exe' is consistently in the same place.  Here's hoping it'll not change.  So the detection logic, for the CI to bother to run at all, is "do you have vswhere.exe where I think it should be":

 $ErrorActionPreference = 'SilentlyContinue'
 $location = ${env:ProgramFiles(x86)}+'\Microsoft Visual Studio\Installer\vswhere.exe'
 if ([System.IO.File]::Exists($location)) {
  write-host $location
  }

    b) Setting, Discovery Script, see the --> attached <-- .ps1 file.  Compliance Rule would be just existential, any result at all.
2)  Deploy that CI in a Baseline, as 'optional'; whether or not I just send it to every box everywhere, or create a collection of machines with Visual Studio 2017 in installed software--either way should work.
3)  Once Deployed and a box with Visual Studio 2017 has run it, confirm that a sample box DOES create a root\cimv2, cm_vswhere class, and there is data inside.
4)  Enable inventory
    a) In my SCCM Console, Administration, Client Settings, right-click Default Client Settings, properties
    b) Hardware Inventory, Set Classes...
    c) Add...
    d) Connect... to the computer you checked in step 3 above (where you confirmed there is data locally on that box in root\cimv2, cm_vswhere), namespace root\cimv2
    e) find the class "cm_vswhere"  check the box, OK. OK. OK.
5) monitor
    a) on your primary site, <installed location for SCCM>\Logs, dataldr.log 
    b) It'll chat about pending adds in the log.  Once that's done, you'll see a note about how it made some views for you.  "Creating view for..."
6) Wait a day, and then look if there is any information in a view probably called something like... v_gs_cm_vswhere.  But your view might have a different name--you'll just have to look.
    a) if you're impatient, back on that box from step 3 above, do some policy refreshes.  then a hardware inventory.
7) End result: you should get information in the field "displayName0", like "Visual Studio Professional 2017", and you'll be able to make custom reports using that information.  Which should hopefully satisfy your licensing folks.
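As an aside, if you want to eyeball what vswhere reports before wiring up the CI, here's a hedged interactive sketch (the path matches the detection script above; -all, -legacy, and -format are switches the tool documents, though as noted I haven't verified -legacy's behavior):

$vswhere = ${env:ProgramFiles(x86)} + '\Microsoft Visual Studio\Installer\vswhere.exe'
# -all includes incomplete instances; -legacy attempts to also report pre-2017 installs
& $vswhere -all -legacy -format json | ConvertFrom-Json |
    Select-Object displayName, installationVersion, installationPath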

To reiterate... tested on ONE box in a lab.  Your mileage may vary.  Additional tweaks or customizations may be needed in the script.  That's why in the script I tried to add a bunch of 'write-verbose' lines.  If you need to figure out why something isn't working right, change $VerbosePreference to 'Continue' (not 'SilentlyContinue') and run it interactively on a machine--to hopefully figure out and address any unanticipated flaws.


Politely Schedule restarts of CCMExec Service


Over the years of troubleshooting the SCCM Client, even with the built-in CCMEval task to attempt to watch and remediate client health of the SCCM Client, experience has shown to those of us in the trenches that sometimes, despite everything else, simply restarting the SMS Agent Host (aka, ccmexec service) will clear previously inexplicable issues.  A service restart is often less disruptive to the end user than saying "have you tried a reboot yet".

If that scenario is something you've encountered in your environment, or you just want to be proactive (like some other companies), one way to accomplish an 'SMS Agent Host' restart is to ask the ccmeval task to do that for you.  Kent Agerlund very kindly shared with me the edits they've made for their customers; and across those customers it was determined that, overall, issues with the SCCM client were reduced.

I've taken his edits, and created a couple of Configuration Items.  It's the ccmeval.xml which indicates what tests should be run by the ccmeval scheduled task.  Two tasks are added to ccmeval.xml:
- Restart CCMExec.exe-Stop
- Restart CCMExec.exe-Start

There are two --> attached <-- Configuration Items.  One is to modify the ccmeval.xml to add the stop/start actions.  The other is to return the ccmeval.xml back to the original values (as of Current Branch 1806 clients; the ccmeval.xml hasn't changed in years, so it is anticipated it won't change in future versions... but nothing is certain). 

What you would do to test:

  • Create a Baseline, let's call it "CCMEval Add Action to Restart CCMEXEC".  Add ONLY the 1 Configuration Item, 'ccmeval.xml Add Service Restart', make it optional (not required).
  • Deploy that baseline to a collection of TEST computers; to run daily, make sure you check the box for Remediation (not just monitor).
  • On the client
    • after the baseline of "CCMEval Add Action to Restart CCMEXEC" has run, go look at ccmeval.xml (it's usually in %windir%\ccm folder); and you should see the new actions have been added.
    • If you are patient, wait overnight.  The next day, check in %windir%\ccm for ccmevalreport.xml.  Open that file and look for the actions "Restart CCMExec.exe-Stop." and "Restart CCMExec.exe-Start."; they should have result codes of 0 (success).  You might also want to take note of the time that ccmevalreport.xml was created, then go look in %windir%\ccm\logs (for example, ccmexec.log or clientidmanagerstartup.log) for entries around that time--you should notice that the logs indicate a service restart.
    • if you are NOT patient... from cmd-prompt-as-admin, you can run ccmeval.exe from %windir%\ccm, and then look at the files and results as indicated above.

PARANOIA TESTING

  • Remove the Deployment of the baseline "CCMEval Add Action to Restart CCMEXEC" to your test collection.
  • Create a Baseline called... "CCMEval Return to original", and add just and only 'ccmeval.xml Return to Original', make it optional (not required).
  • Deploy the baseline to your collection of Test Computers, to run daily, make sure you check the box for Remediation (not just monitor)
  • Confirm the ccmeval.xml gets set back to no longer having the 2 additional tasks
  • Manually run ccmeval.exe after the xml is changed back, and/or wait overnight, to confirm that ccmeval runs, and no longer restarts the ccmexec service.
  • Remove the Deployment of the Baseline "CCMEval Return to Original" (hopefully you'll never need this again... but...)
  • Once you've satisfied yourself that you can not only modify the ccmeval.xml, but also return it to a pre-changed condition, then you will be confident (hopefully) to move forward.

Your next step (if you choose to go forward) is to deploy the "CCMEval Add Action to Restart CCMEXEC" to a collection of targets.

One thought... I personally would not deploy the xml change to any server OS, and definitely not to any of my SCCM servers--because the Management Point processes use ccmexec.  Restarting ccmexec on a Management Point role server might be fine... only you can say what makes sense in your infrastructure.  If you restart SMS Agent Host on your Management Point role servers outside of a reboot, what does that impact for you?  Anything?  If no impacts, then sure.  But YOU need to test, test, test. 

You may be asking yourself why this blog article was titled 'politely'... that's because the ccmeval scheduled task is designed to only run when the client isn't doing other important things and the system is quiet.  By design, ccmeval tries to be quiet and discreet about when it runs, and its run time is randomized.

Use CM Console scripts node to gather log files from CM Clients


To assist in answering a question in this forum post:
https://social.technet.microsoft.com/Forums/en-us/9017aca5-06aa-4a79-a034-a646b19b89fe/collecting-log-files-from-the-client?forum=configmgrcbgeneral

I'm blogging on behalf of Srikant Yadav; he gave me permission to do so.  Thanks Srikant! 

How to make this work..

Step 1:
Make a location on a server you manage/control--which has lots of space.

create a folder called (for example):

E:\ClientLogs
Share that out as ClientLogs$
At a minimum, you need these permissions (if you have multiple domains, or support non-domain joined computers, you'll have to figure out what other permissions might be necessary).

 For share permissions, because it is a computer account that will be 'copying' the logs to that share, add the group:
  <your domain where your computers live>\Domain Computers, with Change and Read.
 For NTFS permissions on the E:\ClientLogs folder, add Modify, Read & Execute, List folder contents, Read, and Write (aka, everything but Full Control) to:
  <that same group you just used for share permissions, aka, <your domain>\Domain Computers>

Step 2:
In the --> attached <-- is a script.  Modify the parameter within that script which is currently...
$Destination = "\\<DummyShare>\ClientLogs$"

To be  \\YourServer\ClientLogs$

Save that modified script as <some location you'll remember>\ImportThisIntoCM.ps1

Step 3:
In your CM Console, go to software library, scripts, create script
ScriptName = Retrieve Client Logs
Script Language = Powershell
Import... and go to <some location you just said you'd never forget> and import that ImportThisIntoCM.ps1 script.
Next
Review the Script Parameters.  You can, if you wish, modify the defaults of the parameters here.  For example, maybe you ALWAYS want to get any ccmsetuplogs, or you know you only want log files that will be within the last 5 days and nothing older.
double-check the Destination is the right servername and sharename
Next, Next, Close.

Step 4:
Approve the script in the Scripts Node.  You may need a peer to do the approval.  In smaller environments, if you are the only admin, you can self approve scripts in the Scripts node if you've configured that in Site Configuration, Site, Hierarchy Settings, uncheck "do not allow script authors to approve their own scripts".  This is a safety feature, that you SHOULD leave checked--because scripts can be powerful.  Some disgruntled admin COULD make a "format c:" type of script, self approve it, and send it as they walk out the door.  Just saying... you might not want to do this.  peer review of scripts is GOOD.

Step 5:
Use it!
As an example, in Assets and Compliance, Devices, pick on a Online device (obviously this only works if the target is online/available), right-click, Run Script.  Pick "Retrieve Client Logs".  At this point, you can change parameters as needed.  Next/next.  You'll see a progress bar. 

When it's done, in the \\yourserver\ClientLogs$ will be Subfolders; CMClientLogs$ for cmclientlogs, WindowsUpdateLogs$ for WindowsUpdateLogs, etc.  Inside those subfolders will be the zipped-up log files, named for the device name targeted.

Step 6:
Have a Cleanup Routine.  The \\YourServer\ClientLogs$ share doesn't have any automatic cleanup around it.  If, say, you were to gather log files all the time, that location might eventually fill up the drive.  Remember to clear it out manually on occasion, or set up some kind of maintenance routine on that server to "delete xx when older than yy days" or something similar; a minimal sketch follows.
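For example, a hedged PowerShell sketch you could run as a scheduled task on that server (the path and the 30-day retention are assumptions; adjust both to your standards):

# Delete gathered log archives older than 30 days
$root = 'E:\ClientLogs'
Get-ChildItem -Path $root -Recurse -File |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Force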

Possible updates...If you read through the script, you'll see that you can make this extensible yourself.  Perhaps you have a <App Specific to our type of business> which has log files that are always stored in c:\programdata\Widgets\Logs.  You can easily add a new section to the script, with another parameter to grab those log files as needed, if needed.

Inventory Per User Installed Applications, For Example, Click-Once


This routine has only had a limited life in a lab environment with only 3 clients.  Use at your own risk, etc. etc.  No promises or guarantees, and it might be the Worst Thing Ever.  Test, test, and test some more. 

What this routine is for: a custom PowerShell script which tries to read what per-user installed things are installed for the currently logged-in user.  I tried in the lab to run it as a Baseline/Compliance Item... but one of the problems is that although it runs as 'SYSTEM', it wants to look at whatever user is currently logged in, and as a Baseline it won't 'wait for a user to log on' to run.  So depending upon when it runs, it might create an empty custom class with nothing to say, simply because the user is not logged on at that moment--even though they are logged on 8 hours a day, it just happened to run within the other 16 hours of that day.

So you, Super CM Admin that you are, might want to forget about doing this as a baseline.  Instead, make the PowerShell script the only thing in the source folder for a package.  Then make an old-school/traditional Package and Program; the Program would run the script "only when a user is logged on", but "with system rights", and you'd deploy the Program to a collection.  If it were me... I'd set the advertisement to run on a schedule, like every 4 days or something.  Note I didn't test this at all in my lab.  I'm just offering this out into the ether for (hopefully) someone else to take and make awesome and bulletproof. 

What the script does is create, and populate, a custom class. 

In the --> attached <-- is also a mof file.  You'd want to go to your console, Administration, Client Settings, Default Cient Settings, Hardware Inventory, set classes, and Import that mof file.  Once that is done, clients will be able to start reporting back on this information.

 

ConfigMgr Truncate History Tables


Thanks very much to Umair Khan, Twitter @TheFrankUK, for the assist!  One of the hiccups recently was making sure to exclude "global data" type _HIST tables, so that DRS replication doesn't want to go into MAINTENANCE_MODE and re-initialize global data.
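If you want to survey the history tables before touching anything, here's a hedged sketch (it only lists candidates by row count; it deliberately truncates nothing, and you would still need to exclude the global-data tables as described above):

-- List _HIST tables in the site database, largest first
select t.name as [HistoryTable], sum(p.rows) as [Rows]
from sys.tables t
join sys.partitions p on p.object_id = t.object_id and p.index_id in (0, 1)
where t.name like '%[_]HIST'
group by t.name
order by [Rows] desc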

Create ConfigMgr Powershell Configuration Items using Powershell

As part of a presentation for the 2019 Midwest Management Summit in Minneapolis, one of the sessions I'm presenting with Jeff Bolduan is on Configuration Items.  As part of that session, we'll be demoing using a PowerShell script to create a PowerShell-based Configuration Item.
 
If you want to see how that works (at least, it works in my lab), --> Here <-- is the script for creating a Configuration Item with multiple tests inside, where the rules are posh-based detection, applicability, and remediation scripts.  For demo purposes, I grabbed the scripts from the blog --> about WSUS Administration/WSUSPool <-- settings enforcement via Configuration Items, and got them all working as 1 Configuration Item with multiple rules.
 
Hopefully for those of you who are looking to create your own re-producible PowerShell code for creating posh-based CIs, the attached example posh will give you an idea of how you might want to get that done.

ConfigMgr MaxExecutionTime Guesses for Updates


There is a situation which MIGHT happen for you.  The default MaxExecutionTime for Cumulative Updates is, I believe, 60 minutes now, but many updates still default to 10 minutes.  I don't personally think that default should change; however, occasionally there are large updates (think Microsoft Office updates) which might be several hundred MB in size, and might take more than 10 minutes to install.  In your reporting, and when looking at local logs, the CM client says the install "Failed", but all you do is re-scan for updates, and CM says it's installed.  So what gives, you wonder?  Well, this could be a possible reason.  It's not that the install 'failed' per se; after 10 minutes, the CM client simply stopped 'watching for' the successful install--it timed out, in a way.  Since I noticed a pattern that when updates are ginormous they take longer to install, below is a POSSIBLE SQL query to help you find and adjust the "Max Execution Timeout" on any individual updates.

A couple of prerequisites.  Naturally, the content has to be downloaded; so if you run this 5 minutes after a "hotfix Tuesday" sync, it might not have much to say, because the content needed to calculate "how big" any particular update is hasn't been downloaded yet.  You do have to wait until your content is downloaded to track these down.

Also note that I haven't created any kind of PowerShell script to automatically adjust the Max Execution Timeout.  This is just a report; the admin would either posh-script changing each individual update (see the sketch below), or use the console: find each update, right-click it, and in the properties for that update, adjust the Max Execution Timeout upward to fit.
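If you do choose to script it, here's a hedged sketch using the ConfigurationManager PowerShell module (this assumes a reasonably current module where Get-CMSoftwareUpdate supports -ArticleId and Set-CMSoftwareUpdate supports -MaximumExecutionMins; the article ID below is purely hypothetical):

# Run from a ConfigMgr console PowerShell session; PS1: is a placeholder for your site-code drive
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location PS1:
# Raise the max runtime for one update to 20 minutes
Get-CMSoftwareUpdate -ArticleId 4461614 -Fast |
    Set-CMSoftwareUpdate -MaximumExecutionMins 20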

Also note these "suggestions" are just that: suggestions.  There is no right or wrong answer for how long Max Execution Timeout should be, and leaving everything as-is will still work just fine.  Here's a scenario where following these suggestions would be a big, bad, horrible idea, and might discourage you from touching this at all.  Let's say you allow your devices a service window of 4 hours every night.  You follow these suggestions, and for whatever reason there were 8 different Office updates, and you changed them all from 10 minutes to 60 minutes each... for a total of 8 hours of estimated install time.  Before the change, a client getting that Software Update deployment would think "OK, these 8 will take me 80 minutes, I can do that in my 4-hour window, let's start!"  It starts installing, and maybe it only gets 3 done... but it does get 3 done.  If you set them to 60 minutes each, the client might decide "wow, 8 hours... I can't do that in my service window... I'll just wait until I have 8+ hours to get this done"--and of course, it may never install any of them.  So be careful in deciding whether or not this is a potentially BAD idea for your environment; or at least be aware of the potential repercussions, so you know what to un-do.

What this sql does is list updates released in the last 30 days whose content has been downloaded, and compare the maxexecutiontime set vs. how big the content is.  If, for example, the content size is between 50 and 100mb, but its maxexecutiontime isn't 20 minutes or more, then maybe you the admin might want to think about making MaxExecutionTime on that specific update 20 minutes--so you don't get false "I failed to install" reports which a re-scan will address.

Again... this isn't perfect.  It's just a possible suggestion, if you have seen this behavior in your Software Updates deployments and were wondering if there was a way to be pro-active about increasing the MaxExecutionTime, without waiting for your reports to tell you the next day.

DECLARE @StartDate datetime = DateADD(Day, -30, GETDATE())
DECLARE @EndDate datetime = GetDate()

-- cte: updates posted in the window with downloaded content, summing content file sizes per update
;with cte as (select ui.MaxExecutionTime/60 [Max ExecutionTime in Minutes], ui.articleid, ui.title, ui.DateLastModified, ui.DatePosted
,ui.IsSuperseded, ui.IsExpired
,SUM(files.FileSize)/1024 as [Size in KB]
,SUM(files.FileSize)/1024/1024 as [Size in MB]
from v_updateinfo ui
join v_UpdateContents content on content.CI_ID=ui.CI_ID
join vCI_ContentFiles files on files.Content_ID=content.Content_ID
where severity is not null
and content.ContentProvisioned = 1
and ui.dateposted between @StartDate and @EndDate
and ui.IsExpired = 0
group by ui.MaxExecutionTime, ui.articleid, ui.title, ui.DateLastModified, ui.dateposted, ui.IsSuperseded, ui.IsExpired
)

select
Case when cte.[Size in MB] < 50 and cte.[Max ExecutionTime in Minutes] >= 10 then 0
when cte.[Size in MB] BETWEEN 50 and 100 and cte.[Max ExecutionTime in Minutes] >= 20 then 0
when cte.[Size in MB] between 100 and 150 and cte.[Max ExecutionTime in Minutes] >= 30 then 0
when cte.[Size in MB] between 150 and 200 and cte.[Max ExecutionTime in Minutes] >= 40 then 0
when cte.[Size in MB] between 200 and 250 and cte.[Max ExecutionTime in Minutes] >= 50 then 0
when cte.[Size in MB] between 250 and 300 and cte.[Max ExecutionTime in Minutes] >= 60 then 0
when cte.[Size in MB] > 300 and cte.[Max ExecutionTime in Minutes] >=90 then 0
else 1
End as [Could use MaxExecutionTime Adjustment],
case when cte.[Size in MB] < 50 then '10 minutes'
when cte.[Size in MB] BETWEEN 50 and 100 then '20 minutes'
when cte.[Size in MB] between 100 and 150 then '30 minutes'
when cte.[Size in MB] between 150 and 200 then '40 minutes'
when cte.[Size in MB] between 200 and 250 then '50 minutes'
when cte.[Size in MB] between 250 and 300 then '60 minutes'
when cte.[Size in MB] > 300 then '90 minutes'
end as 'time to set'
, cte.*

from cte
order by [Could use MaxExecutionTime Adjustment] desc, [Time to set] desc
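If you'd rather script the adjustment than click through the console, something like the following might work.  This is a hedged sketch, assuming the ConfigurationManager module; the ArticleID and site drive are placeholders, and -MaximumExecutionMins is the parameter that (in my reading of the cmdlet) maps to Max Execution Timeout, expressed in minutes:

# Hedged sketch; the ArticleID and 'LAB:' drive are placeholders -- verify in a lab first
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'LAB:'   # your site code drive
# Bump one large update from the default 10 minutes up to 60 minutes
Get-CMSoftwareUpdate -ArticleId '4484104' -Fast |
    Set-CMSoftwareUpdate -MaximumExecutionMins 60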

Configuration Manager Collection Cleanup Suggestions


Certainly in your CurrentBranch Console, under "Management Insights", there are some things regarding collection cleanup; but here are a few other ways to look at your data.

Over the years, Collection plaque and tartar just grow and grow... and over time, people forget what collections were made for, or why they have them.  As a way to help the people who use our console narrow down 'possible' stale, old collections which no longer have any purpose, below is a potential starting point.

What the below will list is collectionids and names which are:
- NOT Currently used for any other collection as a "limited to", "Include", or "Exclude"
- NOT Currently used for any Deployment, whether it's a baseline, an application, an advertisement, or a task sequence
- NOT Currently used to define a Service Window (aka Maintenance Window)
- NOT Currently used for any custom client agent settings you might have configured.
- NOT currently used for any collection variables you might have for OSD
- NOT currently used for Automatic Client Upgrade, as an excluded collection
- NOT a default/out of the box collection (aka, ones that start with SMS)

This isn't, of course, a definitive list.  For example, perhaps a collection was created to deploy "Really Important Application" 2 weeks ago... but the actual deployment hasn't happened yet--it's destined to begin next week.  In that case the collection might show up on this list--but it shouldn't be deleted--it has a future use.  But if your environment has a lot of collections and you're trying to determine which ones might be safe to remove, this is hopefully a potential starting point.

Select c.collectionid, c.name [CollectionName]
from v_collection c
where
    c.collectionid not in (Select SourceCollectionID from vSMS_CollectionDependencies) -- include, excludes, or limited to
and c.collectionid not in (Select collectionid from v_deploymentsummary) -- any deployment, apps, advert, baseline, ts
and c.collectionid not in (Select Collectionid from v_ServiceWindow)
and c.collectionid not in (select collectionid from vClientSettingsAssignments)
and c.collectionid not in (select siteid from vSMS_CollectionVariable) -- OSD Collection Variables
and c.collectionid not in (Select a.ExcludedCollectionID from autoClientUpgradeConfigs a) -- ACU exclusion collection
and c.collectionid not in (select collectionid from v_collection where collectionid like 'sms%') -- exclude default collections
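If you export those candidate CollectionIDs to a file, a hedged PowerShell sketch like this could help with the human-review step before anything gets deleted (assumes the ConfigurationManager module; 'LAB:', the file path, and the -WhatIf-only delete are placeholder/safety assumptions):

# Hedged sketch: review candidates from the query above before deciding anything
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'LAB:'   # your site code drive
# C:\Temp\CandidateCollectionIDs.txt = one CollectionID per line, exported from the query
foreach ($id in (Get-Content 'C:\Temp\CandidateCollectionIDs.txt')) {
    Get-CMCollection -Id $id | Select-Object CollectionID, Name, MemberCount, LastChangeTime
    # Only after a human sanity check, and even then -WhatIf first:
    # Remove-CMCollection -Id $id -WhatIf
}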

Another potential sql query for you to look for "collections not needed" could be this one.  It sorts by "last time members changed in this collection".  The potential argument goes like this: even *if* that collection is being used for an active deployment, if the membership of that machine-based (not user-based) collection hasn't changed in years, how important is it to keep that particular deployment going / available?

;with cte as (select t2.CollectionName, t2.SiteID [CollectionID]
 ,(Cast(t1.EvaluationLength as float)/1000) as [EvalTime (seconds)]
 ,t1.LastRefreshTime, t1.MemberChanges, t1.LastMemberChangeTime,sc.SiteCode,
 case
  when c.refreshtype = 1 then 'Manual'
  when c.refreshtype = 2 then 'Scheduled'
  when c.refreshtype = 4 then 'Incremental'
  when c.refreshtype = 6 then 'Scheduled and Incremental'
 end as [TypeofRefresh]
,c.MemberCount,c.CollectionType
from dbo.collections_L t1 with (nolock)
join collections_g as t2 with (nolock) on t2.collectionid=t1.collectionid
join v_sc_SiteDefinition sc on sc.SiteNumber=t1.SiteNumber
join v_collection c on c.collectionid=t2.siteID
)
Select cte.collectionID, cte.CollectionName, CTE.[EvalTime (seconds)]
,Right(Convert(CHAR(8),DateADD(SECOND,CTE.[EvalTime (seconds)],0),108),5) [EvalTime (Minutes:Seconds)]
,cte.lastrefreshtime, cte.memberchanges, cte.lastmemberchangetime, cte.typeofrefresh, cte.membercount
from cte
where cte.collectiontype=2
and cte.collectionid not like 'SMS%'
order by lastmemberchangetime


Configuration Manager Current Branch FastChannel Information via SQL Query


A lot of people use the console--but I don't go in there that much.  I'm more of a query-SQL kind of person.  Some of the updates lately for Current Branch have been leveraging the "FastChannel" for communications.  If you don't remember, originally the FastChannel was meant for quick-hit communications, primarily around Endpoint Protection.  However, over the last several updates, the product team has been adding more communications over the fast channel.  Most of those communications are to make the console experience feel more "real time"--and I get that, for people who live in the console.  But I don't... so where is that information, and how can I use it, using SQL?

Here are a couple of things to have in your SQL query back pocket.

If you are on Current Branch 1710 or higher, the 1710 clients will communicate back whether they have 1 or more of 4 specific "reboot pending" reasons.  You can see that in the console--but as a SQL report, here's a summary query to show you counts of devices, and what reboot pending state (and why) they are in:

select cdr.ClientState [Pending Reboot],
Case when (1 & cdr.ClientState) = 1 then 1 else 0 end as [Reason: ConfigMgr],
Case when (2 & cdr.ClientState) = 2 then 1 else 0 end as [Reason: Pending File Rename],
Case when (4 & cdr.ClientState) = 4 then 1 else 0 end as [Reason: Windows Update],
Case when (8 & cdr.ClientState) = 8 then 1 else 0 end as [Reason: Windows Feature],
Count(*) [Count]
from vSMS_CombinedDeviceResources cdr
where CAST(right(left(cdr.ClientVersion,9),4) as INT) >= 8577 and cdr.clientversion > '1'
Group by cdr.ClientState
order by cdr.clientstate

It'll only tell you about clients which are version 8577 or higher (aka, 1710).  If you are absolutely certain all your clients are 1710 or higher, you can remove that section of the "where" clause.
Asking for clientversion > 1 is because you "might" have mobile clients reporting to your CM, and you really only want to know about Windows-based clients.  Essentially, those where clauses are there so that you can be a little more accurate about pending reboots.  If you have a lot of clients below version 1710, they can't communicate their clientState via the FastChannel, so you might think "great, these devices don't have a pending reboot"--when what it really means is "these clients aren't able to tell me whether they need a reboot, because their client version is not capable of telling me that, via this method".
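If you want to spot-check what the SQL summary says against a single device, the same reasons surface locally on the client via the ClientSDK WMI namespace; this is a documented client-side method (not part of the SQL above), run on the client itself:

# Run locally on a client; returns RebootPending, IsHardRebootPending, etc.
Invoke-CimMethod -Namespace 'root\ccm\ClientSDK' -ClassName CCM_ClientUtilities -MethodName DetermineIfRebootPending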

Another piece of information that can come in via the Fast Channel: if you are using Current Branch 1806 or higher, 1806 clients can tell you about a CURRENTLY logged-on user.  This differs from what we as SMS/ConfigMgr admins are used to from the past.  We have for years been able to tell "last logged on user" or "most likely primary user"--based on heartbeat, hardware inventory, or asset intelligence data.  But that could be "old news"--depending upon how frequently your heartbeat or inventory runs, it could be hours-to-days-old information.  Current logged-on user should be at worst a few minutes old (depending, of course, upon your size and complexity).

select s1.netbios_name0 [ComputerName], cdr.CurrentLogonUser [Current Logged on User According to FastChannel]
from vSMS_CombinedDeviceResources cdr
join v_r_system s1 on s1.resourceid=cdr.machineid
order by s1.netbios_name0

Visual Studio 2017 Editions using ConfigMgr Configuration item


This is a companion to https://mnscug.org/blogs/sherry-kissinger/416-visual-studio-editions-via-configmgr-mof-edit It *might* be a replacement for the previous mof edit; but I haven't tested this enough to make that conclusion--test yourself to see.

Issue to be resolved:  there are licensing groups at my company who are tasked with ensuring licensing compliance.  There is a significant difference between Visual Studio costs for Standard, Professional, and Enterprise.  Prior to Visual Studio 2017, that information was able to be obtained via registry keys, and a configuration.mof + import (see link above) was sufficient to obtain that information.

According to https://blogs.msdn.microsoft.com/dmx/2017/06/13/how-to-get-visual-studio-2017-version-number-and-edition/ (looks like published date is June, 2017), that information is no longer in the registry.  There is a uservoice published --> https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/19026784-please-add-a-documentation-about-how-to-detect-in <--, requesting that the devs for visual studio put that back--but there's no acknowledgement that it would ever happen.

So that means that we lonely SCCM Administrators, tasked with "somehow" getting the edition information to the licensing teams at our companies, have to--yet again--find a way to "make it happen" using the tools provided.  So here's "one possible way". 

This has only been tested on ONE device in a lab... so it's probably not perfect.  Supposedly, using the -legacy switch, it'll also detect "old versions" installed--but I have no idea whether that works or not.  Might not.

Here's how I plan on deploying this...

1)  configuration Item, Application Type.
    a) "Detection Method", use a powershell script... this may not be universal, but currently in my lab, this location of 'vswhere.exe' is consistently the same.  Here's hoping it'll not change.  So the detection logic, for the CI to bother to run at all, would be "do you have vswhere.exe where I think it should be":

 $ErrorActionPreference = 'SilentlyContinue'
 $location = ${env:ProgramFiles(x86)}+'\Microsoft Visual Studio\Installer\vswhere.exe'
 if ([System.IO.File]::Exists($location)) {
  write-host $location
  }

    b) Setting, Discovery Script, see the --> attached <-- .ps1 file.  Compliance Rule would be just existential, any result at all.
2)  Deploy that CI in a Baseline, as 'optional'; whether or not I just send it to every box everywhere, or create a collection of machines with Visual Studio 2017 in installed software--either way should work.
3)  Once Deployed and a box with Visual Studio 2017 has run it, confirm that a sample box DOES create a root\cimv2, cm_vswhere class, and there is data inside.
4)  Enable inventory
    a) In my SCCM Console, Administration, Client Settings, right-click Default Client Settings, properties
    b) Hardware Inventory, Set Classes...
    c) Add...
    d) Connect... to the computer you checked in step 3 above (where you confirmed there is data locally on that box in root\cimv2, cm_vswhere), namespace root\cimv2
    e) find the class "cm_vswhere"  check the box, OK. OK. OK.
5) monitor
    a) on your primary site, <installed location for SCCM>\Logs, dataldr.log 
    b) It'll chat about pending adds in the log.  Once that's done, you'll see a note about how it made some views for you.  "Creating view for..."
6) Wait a day, and then look if there is any information in a view probably called something like... v_gs_cm_vswhere.  But your view might have a different name--you'll just have to look.
    a) if you're impatient, back on that box from step 3 above, do some policy refreshes.  then a hardware inventory.
7) End result: you should get information in the field "displayName0", like "Visual Studio Professional 2017", and you'll be able to make custom reports using that information.  Which should hopefully satisfy your licensing folks.
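For the curious, below is a rough, heavily hedged sketch of the general shape such a discovery script can take; it is NOT the attached .ps1.  The cm_vswhere class name matches the article, but the property names and the vswhere argument list here are my assumptions:

# Rough sketch only -- NOT the attached .ps1. Property names are assumptions.
$vswhere = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe"
if (-not [System.IO.File]::Exists($vswhere)) { return }

# -legacy is the untested part mentioned above; ConvertFrom-Json needs PowerShell 3.0+
$instances = & $vswhere -all -legacy -format json | ConvertFrom-Json

# Drop and recreate the custom class so stale data doesn't linger between runs
try { ([wmiclass]'root\cimv2:cm_vswhere').Delete() } catch { }
$class = New-Object System.Management.ManagementClass('root\cimv2', [string]::Empty, $null)
$class['__CLASS'] = 'cm_vswhere'
$class.Qualifiers.Add('Static', $true)
$class.Properties.Add('InstanceId', [System.Management.CimType]::String, $false)
$class.Properties['InstanceId'].Qualifiers.Add('Key', $true)
$class.Properties.Add('DisplayName', [System.Management.CimType]::String, $false)
$class.Properties.Add('InstallationVersion', [System.Management.CimType]::String, $false)
$null = $class.Put()

# One WMI instance per detected Visual Studio install
foreach ($i in $instances) {
    $null = Set-WmiInstance -Namespace 'root\cimv2' -Class 'cm_vswhere' -Arguments @{
        InstanceId          = [string]$i.instanceId
        DisplayName         = [string]$i.displayName
        InstallationVersion = [string]$i.installationVersion
    }
}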

To reiterate... tested on ONE box in a lab.  Your mileage may vary.  Additional tweaks or customizations may be needed to the script.  That's why in the script I tried to add a bunch of 'write-verbose'.  If you need to figure out why something isn't working right, change the VerbosePreference to Continue, not SilentlyContinue, and run it interactively on a machine--to hopefully figure out and address any unanticipated flaws.

Politely Schedule restarts of CCMExec Service


Over the years of troubleshooting the SCCM Client, even with the built-in CCMEval task to attempt to watch and remediate client health of the SCCM Client, experience has shown to those of us in the trenches that sometimes, despite everything else, simply restarting the SMS Agent Host (aka, ccmexec service) will clear previously inexplicable issues.  A service restart is often less disruptive to the end user than saying "have you tried a reboot yet".

If that scenario is something you've encountered in your environment, or you just want to be proactive (like some other companies), one way to accomplish an 'SMS Agent Host' restart is to ask the ccmeval task to do that for you.  Kent Agerlund very kindly shared with me the edits they've done for their customers; and across those customers, it was determined that overall, issues with the sccm client were reduced.

I've taken his edits, and created a couple of Configuration Items.  It's the ccmeval.xml which indicates what tests should be run by the ccmeval scheduled task.  Two tasks are added to ccmeval.xml:
- Restart CCMExec.exe-Stop
- Restart CCMExec.exe-Start

There are two --> attached <-- Configuration Items.  One is to modify the ccmeval.xml to add the stop/start actions.  The other is to return the ccmeval.xml back to the original values (as of Current Branch 1806 clients; but the ccmeval.xml hasn't changed in years, so it is anticipated it won't change in future versions... though nothing is certain). 

What you would do to test:

  • Create a Baseline, let's call it "CCMEval Add Action to Restart CCMEXEC".  Add ONLY the 1 Configuration Item, 'ccmeval.xml Add Service Restart', make it optional (not required).
  • Deploy that baseline to a collection of TEST computers; to run daily, make sure you check the box for Remediation (not just monitor).
  • On the client
    • after the baseline of "CCMEval Add Action to Restart CCMEXEC" has run, go look at ccmeval.xml (it's usually in %windir%\ccm folder); and you should see the new actions have been added.
    • if you are patient--wait overnight.  The next day, check %windir%\ccm for ccmevalreport.xml.  Open up that file and look for the actions "Restart CCMExec.exe-Stop." and "Restart CCMExec.exe-Start."; they should have result codes of 0 (success).  You might also want to take note of the time that ccmevalreport.xml was created, and then go look in %windir%\ccm\logs--for example, ccmexec.log or clientidmanagerstartup.log--for entries around that time; the logs should indicate a service restart (a quick process-based check also follows after this list).
    • if you are NOT patient... from a cmd prompt as admin, you can run ccmeval.exe from %windir%\ccm, and then look at the files and results as indicated above.
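One more low-tech verification trick (my addition, not part of the attached CIs): if the service really restarted, the CcmExec process start time will be noticeably newer than the last boot time.

# Compare the two timestamps; a recent CcmExec StartTime means it restarted after boot
(Get-Process -Name CcmExec -ErrorAction SilentlyContinue).StartTime
(Get-CimInstance Win32_OperatingSystem).LastBootUpTime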

PARANOIA TESTING

  • Remove the Deployment of the baseline "CCMEval Add Action to Restart CCMEXEC" to your test collection.
  • Create a Baseline called... "CCMEval Return to original", and add just and only 'ccmeval.xml Return to Original', make it optional (not required).
  • Deploy the baseline to your collection of Test Computers, to run daily, make sure you check the box for Remediation (not just monitor)
  • Confirm the ccmeval.xml gets set back to no longer have the 2 additional tasks
  • Manually run ccmeval.exe after the xml is changed back, and/or wait overnight, to confirm that ccmeval runs, and no longer restarts the ccmexec service.
  • Remove the Deployment of the Baseline "CCMEval Return to Original" (hopefully you'll never need this again... but...)
  • Once you've satisfied yourself that you can not only modify the ccmeval.xml, but also return it to a pre-changed condition, you will (hopefully) be confident to move forward.

Your next step (if you choose to go forward) is to deploy the "CCMEval Add Action to Restart CCMEXEC" to a collection of targets.

One thought... I personally would not deploy the xml change to any Server OS, and definitely not to any of my SCCM Servers--because the Management Point processes use ccmexec.  Restarting ccmexec on a Management Point role server might be fine... only you can say what makes sense in your infrastructure.  If you restart SMS Agent Host on your Management Point role servers outside of a reboot, what does that impact for you?  Anything?  If no impacts, then sure.  But YOU need to test, test, test. 

You may be asking yourself why this blog article was titled 'politely'... that's because the ccmeval scheduled task is designed to only run when the client isn't doing other important things and the system is quiet.  By design, ccmeval tries to be quiet and discreet about when it runs, and it runs at a randomized time.

Use CM Console scripts node to gather log files from CM Clients


To assist in answering a question in this forum post:
https://social.technet.microsoft.com/Forums/en-us/9017aca5-06aa-4a79-a034-a646b19b89fe/collecting-log-files-from-the-client?forum=configmgrcbgeneral

I'm blogging on behalf of Srikant Yadav; he gave me permission to do so.  Thanks Srikant! 

How to make this work..

Step 1:
Make a location on a server you manage/control--which has lots of space.

Create a folder called (for example) E:\ClientLogs, and share that out as ClientLogs$.

At a minimum, you need these permissions (if you have multiple domains, or support non-domain-joined computers, you'll have to figure out what other permissions might be necessary):

 For share permissions, because who will be 'copying' the logs to that share is a computer, add the group
  <your domain where your computers live>\Domain Computers, with Change, Read.
 For NTFS permissions on the E:\ClientLogs folder, add Modify, Read & Execute, List folder contents, Read, Write (aka, everything but Full Control) to
  <that same group you just used for share permissions, aka, <your domain>\Domain Computers>
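If you'd rather script the share creation, here's one possible sketch; CONTOSO is a placeholder domain, and New-SmbShare assumes Server 2012 or later:

# Placeholder domain CONTOSO; adjust the path, share name, and group for your environment
New-Item -Path 'E:\ClientLogs' -ItemType Directory -Force | Out-Null
New-SmbShare -Name 'ClientLogs$' -Path 'E:\ClientLogs' -ChangeAccess 'CONTOSO\Domain Computers'
# NTFS: Modify with folder/file inheritance (aka, everything but Full Control)
icacls.exe 'E:\ClientLogs' /grant 'CONTOSO\Domain Computers:(OI)(CI)M'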

Step 2:
In the --> attached <-- is a script.  Modify the parameter within that script which is currently...
$Destination = "\\<DummyShare>\ClientLogs$"

To be  \\YourServer\ClientLogs$

Save that modified script as <some location you'll remember>\ImportThisIntoCM.ps1

Step 3:
In your CM Console, go to software library, scripts, create script
ScriptName = Retrieve Client Logs
Script Language = Powershell
Import... and go to <some location you just said you'd never forget> and import that ImportThisIntoCM.ps1 script.
Next
Review the Script Parameters.  You can, if you wish, modify the defaults of the parameters here.  For example, maybe you ALWAYS want to get any ccmsetuplogs, or you know you only want log files that will be within the last 5 days and nothing older.
double-check the Destination is the right servername and sharename
Next, Next, Close.

Step 4:
Approve the script in the Scripts node.  You may need a peer to do the approval.  In smaller environments, if you are the only admin, you can self-approve scripts in the Scripts node if you've configured that in Site Configuration, Sites, Hierarchy Settings: uncheck "do not allow script authors to approve their own scripts".  This is a safety feature that you SHOULD leave checked--because scripts can be powerful.  Some disgruntled admin COULD make a "format c:" type of script, self-approve it, and send it out as they walk out the door.  Just saying... you might not want to do this.  Peer review of scripts is GOOD.

Step 5:
Use it!
As an example, in Assets and Compliance, Devices, pick an online device (obviously this only works if the target is online/available), right-click, Run Script.  Pick "Retrieve Client Logs".  At this point, you can change parameters as needed.  Next/next.  You'll see a progress bar. 

When it's done, in \\yourserver\ClientLogs$ there will be subfolders: CMClientLogs$ for CM client logs, WindowsUpdateLogs$ for Windows Update logs, etc.  Inside those subfolders will be the zipped-up log files, named for the device targeted.
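As an aside, if you'd rather trigger the same approved script from PowerShell instead of the console right-click, here's a hedged sketch; the GUID is a placeholder for your approved script's ID (visible in the Scripts node), and it assumes the ConfigurationManager module:

# Hedged sketch; the ScriptGuid below is a placeholder -- copy yours from the Scripts node
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'LAB:'   # your site code drive
$target = Get-CMDevice -Name 'PC001'   # must be online, as noted above
Invoke-CMScript -ScriptGuid 'DF8E7546-FD66-4A3D-A129-53AF5AA54F80' -Device $target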

Step 6:
Have a cleanup routine.  The \\YourServer\ClientLogs$ share doesn't have any automatic cleanup around it.  If, say, you were to gather log files all the time, wherever that location exists might fill up the drive.  Remember to clear that out manually occasionally, or set up some kind of maintenance routine on that server to "delete xx when older than yy days" or something.  A sketch follows below.
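For example, this is one possible scheduled-task payload for that cleanup; the path and retention age are assumptions to adjust:

# Purge gathered zips older than 14 days; adjust path and age for your retention needs
Get-ChildItem -Path 'E:\ClientLogs' -Recurse -File -Filter '*.zip' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-14) } |
    Remove-Item -Force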

Possible updates...If you read through the script, you'll see that you can make this extensible yourself.  Perhaps you have a <App Specific to our type of business> which has log files that are always stored in c:\programdata\Widgets\Logs.  You can easily add a new section to the script, with another parameter to grab those log files as needed, if needed.

Inventory Per User Installed Applications, For Example, Click-Once


This routine has only had a limited life in a lab environment with only 3 clients.  Use at your own risk, etc. etc.  No promises or guarantees, and it might be the Worst Thing Ever.  Test, test, and test some more. 

What this routine is for: a custom powershell script which tries to read what per-user installed things exist for the currently logged-in user.  I tried in the lab to run it as a Baseline/Compliance Item... but one of the problems is that although it runs as 'SYSTEM', it wants to look at whatever user is currently logged in.  As a Baseline, it won't 'wait for a user to log on' to run.  So depending upon when it runs, it might create an empty custom class with nothing to say, simply because the user happened to not be logged on--even though they are logged on 8 hours a day, it just happened to run within the other 16 hours of that day.
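Not the attached script, but here's a hedged sketch of the core trick such a script needs: from SYSTEM, figure out who is logged on at the console, translate that to a SID, and read that user's own per-user Uninstall key (where Click-Once installs register).  Everything here is standard WMI/.NET; only the idea that the attached script works exactly this way is my assumption:

# Sketch only: run as SYSTEM while a user is logged on at the console
$owner = (Get-WmiObject -Class Win32_ComputerSystem).UserName   # DOMAIN\user, or $null if nobody
if ($owner) {
    $sid = (New-Object System.Security.Principal.NTAccount($owner)).Translate(
        [System.Security.Principal.SecurityIdentifier]).Value
    # Per-user (e.g. Click-Once) installs live under the user's own Uninstall key
    Get-ItemProperty "Registry::HKEY_USERS\$sid\Software\Microsoft\Windows\CurrentVersion\Uninstall\*" |
        Select-Object DisplayName, DisplayVersion, Publisher
}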

So you, Super CM Admin that you are, might want to forget about doing this as a baseline.  Instead, make the powershell script the only thing in the source folder for a package.  Then make an old school/traditional Package and Program; the program would run the script "only when a user is logged on", but "with system rights".  Deploy the program to a collection.  If it were me, I'd set the advertisement to run on a schedule, like every 4 days or something.  Note I didn't test this at all in my lab.  I'm just offering this out into the ether for (hopefully) someone else to take and make awesome and bulletproof. 

What the script does is create, and populate, a custom class. 

In the --> attached <-- is also a mof file.  You'd want to go to your console, Administration, Client Settings, Default Client Settings, Hardware Inventory, Set Classes, and Import that mof file.  Once that is done, clients will be able to start reporting back this information.

 
