
Uncovering Anomalies in Time-series Data with Kusto Query Language (KQL)

Anomaly detection is a crucial task in monitoring the performance of various systems. In this blog post, we will discuss how to use Kusto Query Language (KQL) to detect anomalies in CPU performance data.

Spikes

One of the most common types of anomalies is spikes in the data. Spikes occur when the data deviates significantly from its normal behavior. To detect spikes in CPU usage over time, we can use the following KQL query:

let window = 24h;
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated > ago(window)
| summarize binAvg = avg(CounterValue) by bin(TimeGenerated, 2h), Computer
| join kind=inner (
    Perf
    | where TimeGenerated > ago(window) and ObjectName == "Processor" and CounterName == "% Processor Time"
    | summarize overallAvg = avg(CounterValue), overallStdev = stdev(CounterValue) by Computer
) on Computer
| where abs(binAvg - overallAvg) > 3 * overallStdev

This query first filters the data to CPU usage within the last 24 hours. It then groups the data by two-hour window and computer and calculates each window's average, joins in the overall average and standard deviation for each computer, and finally keeps only the windows whose average is more than 3 standard deviations away from that computer's overall average.

Outliers

Another type of anomaly is outliers. Outliers are data points that are significantly different from the rest of the data. To detect outliers in CPU usage across different machines, we can use the following KQL query:

Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize percentile(CounterValue,75) by Computer
| where percentile_CounterValue_75 > 50

This query filters the data to include only CPU usage data, calculates the 75th percentile of the data for each computer, and then keeps only the computers whose 75th percentile value is higher than 50.

Changes over time

Finally, another type of anomaly is changes in the data over time. To detect changes in CPU usage over time, we can use the following KQL query:

let window = 7d;
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated > ago(window)
| summarize dailyAvg = avg(CounterValue) by Computer, Day = startofday(TimeGenerated)
| order by Computer asc, Day asc
| extend diff = iif(Computer == prev(Computer), dailyAvg - prev(dailyAvg), real(null))
| where abs(diff) > 10

This query filters the data to CPU usage within the last 7 days, groups it by day and computer, and calculates each day's average. It then compares each day's average with the previous day's average for the same computer and keeps only the rows where the difference exceeds 10 in either direction.

Summary

In this blog post, we have discussed how to use KQL to detect different types of anomalies in CPU performance data. These queries can be customized and adjusted to fit the specific needs of your system and can be a valuable tool in monitoring and maintaining the performance of your systems. Anomaly detection can be complex but is also very powerful.


Kusto Detective Agency: Challenge 5 – Big heist


The ADX team upped their game once again. Time for a proper forensic investigation: track down the baddies, find clues, and decipher their meaning, all while racing against the clock. Can you come up with the date and location of the heist in time to stop them?

General advice

This challenge requires a bit of creative thinking; even with the hints there are multiple paths to go down, and not all of them lead to the right outcome. The key to this one: keep it simple and logical.

Challenge 5: Big heist

This challenge also has multiple parts: first we need to identify four chatroom users from over three million records, and then we need to "hack" their IPs to get more clues.

Query Hint Part 1

Trying to identify the right user behaviors here is super tricky; I got tripped up by adding a level of complexity that was unnecessary. At its simplest, we need to find a room that only 4 people joined and no one else. Some KQL commands that will be useful here are tostring, split, extend and row_cumsum.
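To see how those pieces fit together, here is a minimal sketch (using made-up sample messages, not the challenge's actual format) of using split to pull fields out of a message and row_cumsum to keep a running occupancy count per channel:

// Sketch: running occupancy per channel, with hypothetical sample data
datatable(Timestamp:datetime, Message:string) [
    datetime(2022-10-01 10:00:00), "alice joined #kql",
    datetime(2022-10-01 10:05:00), "bob joined #kql",
    datetime(2022-10-01 10:10:00), "alice left #kql"
]
| extend user = tostring(split(Message, " ", 0)[0]), chan = tostring(split(Message, " ", 2)[0])
| extend delta = iif(Message contains "joined", 1, -1)
| order by chan asc, Timestamp asc
| extend occupancy = row_cumsum(delta, chan != prev(chan))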

Big heist challenge text - Part 1

Hello. It’s going to happen soon: a big heist. You can stop it if you are quick enough. Find the exact place and time it’s going to happen.
Do it right, and you will be rewarded, do it wrong, and you will miss your chance.

Here are some pieces of the information:
The heist team has 4 members. They are very careful, hide well with minimal interaction with the external world. Yet, they use public chat-server for their syncs. The data below was captured from the chat-server: it doesn’t include messages, but still it may be useful. See what you can do to find the IPs the gang uses to communicate.
Once you have their IPs, use my small utility to sneak into their machine’s and find more hints:
https://sneakinto.z13.web.core.windows.net/<ip>

Cheers
El Puente

PS:
Feeling uncomfortable and wondering about an elephant in the room: why would I help you?
Nothing escapes you, ha?
Let’s put it this way: we live in a circus full of competition. I can use some of your help, and nothing breaks if you use mine… You see, everything is about symbiosis.
Anyway, what do you have to lose? Look on an illustrated past, fast forward N days and realize the future is here.

Query challenge 5 - Part 1

let rooms =
ChatLogs
| where Message contains "joined"
| extend user = tostring(split(Message," ",1))
| extend chan = tostring(split(Message," ",5))
| distinct user, chan
| summarize count() by chan
| where count_ == 4
| project chan;
let chatroom =
ChatLogs
| extend action = tostring(split(Message," ",2))
| where action contains "joined" or action contains "left"
| extend A1 = iif(action contains "joined", 1, -1)
| extend user = tostring(split(Message," ",1))
| extend chan = tostring(split(Message," ",5))
| where chan in (rooms)
| order by Timestamp asc
| extend total = row_cumsum(A1, chan != prev(chan))
| where total == 4
| distinct chan;
let users =
ChatLogs
| extend chan = tostring(split(Message," ",5))
| where chan in (chatroom)
| extend user = tostring(split(Message," ",1))
| distinct user;
ChatLogs
| extend user = tostring(split(Message," ",1))
| where user in (users)
| where Message contains "logged"
| extend IP = tostring(split(Message," ",5))
| distinct IP

Alright, we've got some IPs, so it's time to "hack". Using the provided tool you'll gather a set of clues from each of the gang members. There are a few key things you need to find: an email, some pictures, a cypher tool, an article plus a PDF copy of it, and of course a video from the nefarious Professor Smoke.

From here on out it's all investigative skills; you now have everything you need to find the date and location of the heist and save that datacenter!

Final hint

In order to decrypt the secret message, you're going to need a special key. The format looks familiar, right? Spot on: you'll need the answer from challenge 4!

Congratulations Detective!

If you’ve found this blog series useful, please let me know via LinkedIn or drop a comment below. These challenges have been super fun and I for one am looking forward to season 2!


Kusto Detective Agency: Challenge 4 – Ready to play?


Just when you thought these challenges couldn't get any cooler, along comes your very own nemesis and a multi-part puzzle taking you on a street tour of New York City.

General advice

First, we need to import the data ourselves this time around using Ingest from blob under the Data blade; you can also change the column name (I used "Primes").
Calculating the prime numbers can be a little tricky, as our free ADX cluster requires us to be clever with our query to allow it to complete; luckily, we get a free lesson on "special primes".

Challenge 4: Ready to play?

This challenge has two parts, and we'll look at them in turn: first we need to identify a specific prime number and use it to get the second clue, and then we have to find a specific area in New York City.

Query Hint Part 1
Calculating the largest special prime under 100M can be done in a variety of ways; the trick is working within the limited capacity of our free ADX cluster. KQL commands that are useful here are serialize, prev, next and join.
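As a quick illustration of how prev and next walk a serialized column (a toy sketch, not the solution itself):

// Sketch: each row sees its neighbours once the table is serialized
datatable(Primes:long) [2, 3, 5, 7, 11, 13]
| serialize
| extend prevPrime = prev(Primes), nextPrime = next(Primes)
| extend candidate = prevPrime + Primes + 1 // the "special prime" form: sum of two consecutive primes, plus 1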
Ready to play? challenge text - Part 1


Hello. I have been watching you, and I am pretty impressed with your abilities of hacking and cracking little crimes.
Want to play big? Here is a prime puzzle for you. Find what it means and prove yourself worthy.

20INznpGzmkmK2NlZ0JILtO4OoYhOoYUB0OrOoTl5mJ3KgXrB0[8LTSSXUYhzUY8vmkyKUYevUYrDgYNK07yaf7soC3kKgMlOtHkLt[kZEclBtkyOoYwvtJGK2YevUY[v65iLtkeLEOhvtNlBtpizoY[v65yLdOkLEOhvtNlDn5lB07lOtJIDmllzmJ4vf7soCpiLdYIK0[eK27soleqO6keDpYp2CeH5d\F\fN6aQT6aQL[aQcUaQc[aQ57aQ5[aQDG

Start by grabbing Prime Numbers from
https://kustodetectiveagency.blob.core.windows.net/prime-numbers/prime-numbers.csv.gz and educate yourself on Special Prime numbers (https://www.geeksforgeeks.org/special-prime-numbers), this should get you to
https://aka.ms/{Largest special prime under 100M}

Once you get this done – you will get the next hint.

Cheers,
El Puente.

Query challenge 4 - Part 1

//Method 1 – This query finds the largest special prime under 100M using trial division to test each candidate

Challenge4
| serialize
| order by Primes asc
| extend prevA = prev(Primes, 1)
| extend NextA = next(prevA, 1)
| extend test = prevA + NextA + 1
| where test < 100000000 and test > 99999000 // narrow the candidates before the expensive divisor check
| where test % 2 != 0 // skip even numbers
| extend divider = range(3, test/2, 2) // divider candidates
| mv-apply divider to typeof(long) on
(
  summarize Dividers = countif(test % divider == 0) // count dividers
)
| where Dividers == 0 // prime numbers don't have dividers
| top 1 by test

//Method 2 – This query finds the largest special prime under 100M by comparing special-prime candidates against the full data set of prime numbers

Challenge4
| serialize
| project specialPrime = prev(Primes) + Primes + 1
| join kind=inner (Challenge4) on $left.specialPrime == $right.Primes
| where specialPrime < 100000000
| top 1 by Primes desc



Now that we have our prime number, we can move on to part 2.
Largest special prime under 100M

The number we want is 99999517, so we can now head over to http://aka.ms/99999517

A-ha, a message from our nemesis: we need to meet them in a specific area marked by certain types of trees!

Ready to play? challenge text - Part 2

Well done, my friend.
It's time to meet. Let's go for a virtual sTREEt tour...
Across the Big Apple city, there is a special place with Turkish Hazelnut and four Schubert Chokecherries within 66-meters radius area.
Go 'out' and look for me there, near the smallest American Linden tree (within the same area).
Find me and the bottom line: my key message to you.

Cheers,
El Puente.

PS: You know what to do with the following:

----------------------------------------------------------------------------------------------

.execute database script <|
// The data below is from https://data.cityofnewyork.us/Environment/2015-Street-Tree-Census-Tree-Data/uvpi-gqnh 
// The size of the tree can be derived using 'tree_dbh' (tree diameter) column.
.create-merge table nyc_trees 
       (tree_id:int, block_id:int, created_at:datetime, tree_dbh:int, stump_diam:int, 
curb_loc:string, status:string, health:string, spc_latin:string, spc_common:string, steward:string,
guards:string, sidewalk:string, user_type:string, problems:string, root_stone:string, root_grate:string,
root_other:string, trunk_wire:string, trnk_light:string, trnk_other:string, brch_light:string, brch_shoe:string,
brch_other:string, address:string, postcode:int, zip_city:string, community_board:int, borocode:int, borough:string,
cncldist:int, st_assem:int, st_senate:int, nta:string, nta_name:string, boro_ct:string, ['state']:string,
latitude:real, longitude:real, x_sp:real, y_sp:real, council_district:int, census_tract:int, ['bin']:int, bbl:long)
with (docstring = "2015 NYC Tree Census")
.ingest async into table nyc_trees ('https://kustodetectiveagency.blob.core.windows.net/el-puente/1.csv.gz')
.ingest async into table nyc_trees ('https://kustodetectiveagency.blob.core.windows.net/el-puente/2.csv.gz')
.ingest async into table nyc_trees ('https://kustodetectiveagency.blob.core.windows.net/el-puente/3.csv.gz')
// Get a virtual tour link with Latitude/Longitude coordinates
.create-or-alter function with (docstring = "Virtual tour starts here", skipvalidation = "true") VirtualTourLink(lat:real, lon:real) { 
	print Link=strcat('https://www.google.com/maps/@', lat, ',', lon, ',4a,75y,32.0h,79.0t/data=!3m7!1e1!3m5!1s-1P!2e0!5s20191101T000000!7i16384!8i8192')
}
// Decrypt message helper function. Usage: print Message=Decrypt(message, key)
.create-or-alter function with 
  (docstring = "Use this function to decrypt messages")
  Decrypt(_message:string, _key:string) { 
    let S = (_key:string) {let r = array_concat(range(48, 57, 1), range(65, 92, 1), range(97, 122, 1)); 
    toscalar(print l=r, key=to_utf8(hash_sha256(_key)) | mv-expand l to typeof(int), key to typeof(int) | order by key asc | summarize make_string(make_list(l)))};
    let cypher1 = S(tolower(_key)); let cypher2 = S(toupper(_key)); coalesce(base64_decode_tostring(translate(cypher1, cypher2, _message)), "Failure: wrong key")
}

Using the census data, we now need to figure out the location in the clue; luckily, it's only a KQL query away.

Query Hint - Part 2
Getting the right size area can be tricky; a KQL command that will be extremely helpful is geo_point_to_h3cell.
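For a feel of what it does: geo_point_to_h3cell maps a longitude/latitude pair onto a hexagonal grid cell at a chosen resolution, and at resolution 10 a cell is roughly the ~66-meter scale the clue describes. A tiny sketch (the coordinates are arbitrary):

// Sketch: bucket a point into an H3 cell at resolution 10
print longitude = -73.9857, latitude = 40.7484
| extend h3cell = geo_point_to_h3cell(longitude, latitude, 10)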

Query challenge 4 - Part 2

//This query filters down to a specific size area until it matches the set of trees given in the clue

let locations =
nyc_trees
| extend h3cell = geo_point_to_h3cell(longitude, latitude, 10)
| where spc_common == "'Schubert' chokecherry"
| summarize count() by h3cell, spc_common
| where count_ == 4
| project h3cell;
let final =
nyc_trees
| extend h3cell = geo_point_to_h3cell(longitude, latitude, 10)
| where h3cell in (locations)
| where spc_common == "Turkish hazelnut" or spc_common == "American linden"
| summarize count() by h3cell, spc_common
| where spc_common == "Turkish hazelnut" and count_ == 1
| project h3cell;
nyc_trees
| extend h3cell = geo_point_to_h3cell(longitude, latitude, 10)
| where h3cell in (final)
| where spc_common == "American linden"
| top 1 by tree_dbh asc
| project latitude, longitude
| extend TourLink = strcat('https://www.google.com/maps/@', latitude, ',', longitude, ',4a,75y,32.0h,79.0t/data=!3m7!1e1!3m5!1s-1P!2e0!5s20191101T000000!7i16384!8i8192')


Now that we have a location, we're not done yet; here's where the fun really starts. Using our generated link will take us on a "Tour of the City" via a Google Maps street view link. Have a look around for our mysterious "El Puente"; you may need to walk around a little bit.

Now that we've found the message, there's one more thing we need to do: use the Decrypt function to decode the message from our detective portal. This part was a little tricky and took a few tries to get the right string to use.

Decryption Key

Using the mural, the phrase we are looking for is "ASHES to ASHES".
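Plugging it into the helper from the script above, following the usage noted in its docstring (the angle-bracket placeholder stands in for the ciphertext recovered from the portal):

print Message = Decrypt("<encrypted message from the portal>", "ASHES to ASHES")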

There we have it, another secret message! Keep a hold of this answer as you’ll need it to complete the final challenge.

Well done Detective, we’ve been on quite the journey. See you in the next challenge!



Kusto Detective Agency: Challenge 3 – Bank robbery!


I must admit that the difficulty spike on the challenges is both refreshing and surprising. The level of care that went into crafting each of these scenarios is outstanding, and the ADX team have certainly outdone themselves. If you like these cases as much as I do, you can let the team know at kustodetectives@microsoft.com

General advice

Again, this case requires some pretty heavy assumptions to solve, some of which the hints will give you clarity on. When trying to solve the bank robbery it's very easy to end up with an overcomplicated solution that takes you in the wrong direction; try to keep this one simple.

Challenge 3: Bank robbery!

For this challenge you need to track down the hideout of a trio of bank robbers. It seems simple: you have the address of the bank and are provided with all the traffic data for the area, so now it's just a case of figuring out where the robbers drove off to.

Query Hint
The trick with this challenge is that you need to create a set of vehicles that weren't moving during the robbery; the catch, of course, is that only moving vehicles have records in the traffic data. KQL commands that will be useful for this challenge are join (remember that there are different kinds of joins) and arg_max.
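If the leftanti flavour is unfamiliar, it returns only the left-side rows that have no match on the right, which is exactly how you isolate parked cars; a toy sketch:

// Sketch: leftanti keeps left rows with no matching VIN on the right
datatable(VIN:string) ["AAA", "BBB", "CCC"]
| join kind=leftanti (datatable(VIN:string) ["BBB"]) on VIN
// returns AAA and CCC - the vehicles never seen in the right-hand set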

Bonus cool tip

Thanks to my colleague Rogerio Barros for showing me this one, because it is awesome! Due to the nature of the traffic data, it is actually possible to plot the route of any number of cars using | render scatterchart, as in the sketch below; this is quite interesting once you have identified the three suspects.
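A sketch of the idea, with placeholder VINs to swap in once you have real ones:

// Plot vehicle movement across the city grid (VINs are placeholders)
Traffic
| where VIN in ("VIN-1", "VIN-2", "VIN-3")
| project Ave, Street, VIN
| render scatterchart with (series=VIN)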

Bank robbery challenge text

We have a situation, rookie.
As you may have heard from the news, there was a bank robbery earlier today.
In short: the good old downtown bank located at 157th Ave / 148th Street has been robbed.
The police were too late to arrive and missed the gang, and now they have turned to us to help locating the gang.
No doubt the service we provided to the mayor Mrs. Gaia Budskott in past – helped landing this case on our table now.

Here is a precise order of events:

  • 08:17AM: A gang of three armed men enter a bank located at 157th Ave / 148th Street and start collecting the money from the clerks.
  • 08:31AM: After collecting a decent loot (est. 1,000,000$ in cash), they pack up and get out.
  • 08:40AM: Police arrives at the crime scene, just to find out that it is too late, and the gang is not near the bank. The city is sealed – all vehicles are checked, robbers can’t escape. Witnesses tell about a group of three men splitting into three different cars and driving away.
  • 11:10AM: After 2.5 hours of unsuccessful attempts to look around, the police decide to turn to us, so we can help in finding where the gang is hiding.

Police gave us a data set of cameras recordings of all vehicles and their movements from 08:00AM till 11:00AM. Find it below.

Let’s cut to the chase. It’s up to you to locate gang’s hiding place!
Don’t let us down!

Query challenge 3

//This query finds the set of cars not moving during the robbery that started moving after it occurred, then tracks the vehicles that converge on the same address

let Cars =
Traffic
| where Street == 148 and Ave == 157
| where Timestamp > datetime(2022-10-16T08:31:00Z) and Timestamp < datetime(2022-10-16T08:40:00Z)
| join kind=leftanti (
    Traffic
    | where Timestamp >= datetime(2022-10-16T08:17:00Z) and Timestamp <= datetime(2022-10-16T08:31:00Z)
) on VIN
| distinct VIN;
Traffic
| where VIN in (Cars)
| summarize arg_max(Timestamp, *) by VIN
| summarize count(VIN) by Street, Ave
| where count_VIN == 3



Now just wait for the police to swoop in and recover the stolen cash. Another job well done, Detective!


Kusto Detective Agency: Challenge 2 – Election fraud in Digitown!


These challenges are a fantastic, hackathon-style approach to learning KQL; every week poses a new and unique approach to different KQL commands, and as the weeks progress I've learned some interesting tricks. Let's take a look at challenge 2.

General advice

I've mentioned previously that there are hints that can be accessed from the detective UI. From this challenge onwards the hints provide critical information; without them there are assumptions you need to make which, if incorrect, will throw you off the correct solution.

This is also the first challenge that has multiple ways to get to the answer; in this post I will be discussing the more interesting one.

Challenge 2: Election fraud?

The second challenge ramps up the difficulty: you've been asked to verify the results of the recent election for the town mascot.

Query Hint
In order to solve this challenge, you need to figure out whether any of the votes are invalid and, if any are, remove them from the results.
KQL commands that will be helpful are the anomaly detection functions, particularly series_decompose_anomalies, along with bin; alternatively you can also make use of format_datetime and a little bit of guesswork.
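As a sketch of that simpler alternative, the idea is to bucket the votes by hour and eyeball any proxies with implausibly uniform counts (column names taken from the challenge's Votes table):

// Sketch: vote counts per proxy per hour; suspicious patterns stand out
Votes
| where vote == "Poppy"
| summarize votes = count() by via_ip, hour = format_datetime(Timestamp, "yyyy-MM-dd HH")
| order by votes desc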

Query challenge 2

//This query will analyze the votes for the problem candidate and look for anomalies; any that are found will be removed from the final count, giving the correct results for the election!

let compromisedProxies = Votes
| where vote == "Poppy"
| summarize Count = count() by bin(Timestamp, 1h), via_ip
| summarize votesPoppy = make_list(Count), Timestamp = make_list(Timestamp) by via_ip
| extend outliers = series_decompose_anomalies(votesPoppy)
| mv-expand Timestamp, votesPoppy, outliers
| where outliers == 1
| distinct via_ip;
Votes
| where not(via_ip in (compromisedProxies) and vote == "Poppy")
| summarize Count = count() by vote
| as hint.materialized=true T
| extend Total = toscalar(T | summarize sum(Count))
| project vote, Percentage = round(Count*100.0 / Total, 1), Count
| order by Count



Digitown can sleep easy knowing that it has the correct town mascot thanks to your efforts! Stay tuned for some excitement in challenge 3.


Kusto Detective Agency: Hints and my experience


So, what is the Kusto Detective Agency?

This set of challenges is an amazing, gamified way to learn the Kusto Query Language (KQL), which is the language used by several Azure services including Azure Monitor, Sentinel, M365 Defender and Azure Data Explorer (ADX) to name a few. Using your skills, you will help the citizens of Digitown solve mysteries and crimes to make the city a better place!

How do I get started?

The challenges are available here https://detective.kusto.io/. Follow a few basic steps to get started: create a free ADX cluster here https://aka.ms/kustofree and copy the Cluster URI; you'll need it as part of the onboarding answer.

Now answer a simple question using KQL: calculate the sum of the "Score" column.

If you are just getting started learning KQL, check out Rod Trent's 'Must Learn KQL' series!

https://aka.ms/MustLearnKQL https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/

as well as these cool resources

Watch this basic explainer on how the query language works: http://aka.ms/StartKqlVideo
Check out the documentation here: Kusto Query Language (KQL) overview- Azure Data Explorer | Microsoft Docs

For help with the first query, click the spoiler tag below.

Onboarding Query

Onboarding //This is the name of the table we will be running our query against
| summarize sum(Score) //the sum() function will add up all the numbers in the "Score" column

General advice

Each challenge has up to three hints that can be accessed through the hints section of your Detective UI. The hints are quite useful, and I would recommend using them if you get stuck, especially as some of them include information that is important for confirming assumptions. There are also different ways to get to the answers, which shows the power of creative thinking.

Challenge 1: The rarest book is missing!

The first challenge is quite interesting: you are tasked with finding a rare missing book. Most people I've spoken to have figured out the method but get stuck on the KQL query, so I've included an extra hint below to get you started.

Query Hint
In order to solve this, you'll need to work with the weights of the books on the shelves.
KQL commands that will be helpful are sum() and join.
The rarest book is missing challenge 1 text

This was supposed to be a great day for Digitown’s National Library Museum and all of Digitown.
The museum has just finished scanning more than 325,000 rare books, so that history lovers around the world can experience the ancient culture and knowledge of the Digitown Explorers.
The great book exhibition was about to re-open, when the museum director noticed that he can’t locate the rarest book in the world:
“De Revolutionibus Magnis Data”, published 1613, by Gustav Kustov.
The mayor of the Digitown herself, Mrs. Gaia Budskott – has called on our agency to help find the missing artifact.

Luckily, everything is digital in the Digitown library:

  • – Each book has its parameters recorded: number of pages, weight.
  • – Each book has RFID sticker attached (RFID: radio-transmitter with ID).
  • – Each shelve in the Museum sends data: what RFIDs appear on the shelve and also measures actual total weight of books on the shelve.

Unfortunately, the RFID of the “De Revolutionibus Magnis Data” was found on the museum floor – detached and lonely.
Perhaps, you will be able to locate the book on one of the museum shelves and save the day?

Query challenge 1

//This query will calculate the weight of the books on each shelf and compare that to the weight registered by the sensor; find the shelf with extra weight and we'll find our book!
Shelves
| mv-expand rf_ids
| extend RID = tostring(rf_ids)
| join (Books) on $left.RID == $right.rf_id
| summarize sum(weight_gram) by shelf, total_weight
| extend diff = total_weight - sum_weight_gram
| order by diff


I will be talking about the rest of the challenges in a later series, so be sure to check back soon. In the meantime, good luck, Detective!


Azure Monitor: VM insights now supports Azure Monitor agent

It's been on my wish list for a while, but it looks like the Azure Monitor team has a present for us: you can now enable VM insights using the Azure Monitor agent (AMA). Note: this is in public preview.

With this release these are the key features:

  • Easy configuration using data collection rules (DCR) to collect VM performance counters and specific data types.
  • Option to enable/disable processes and dependencies data that provides Map view, thus, optimizing costs.
  • Enhanced security and performance that comes with using Azure Monitor agent and managed identity.

For those not familiar with VM insights, here is a fantastic overview, but in short: VM insights gives you a standardized way of measuring and managing the performance and health of your virtual machines and virtual machine scale sets, including the processes running on them and their dependencies on other resources.

Changes for Azure Monitor agent

Be aware of the following differences when using AMA for VM insights:

Workspace configuration. VM insights no longer needs to be enabled on the Log Analytics workspace.

Data collection rule. Azure Monitor agent uses data collection rules (DCR) to configure its data collection. VM insights creates a data collection rule that is automatically deployed if you enable your machine using the Azure portal.

Agent deployment. There are minor changes to the process for onboarding virtual machines to VM insights in the Azure portal. You must now select which agent you want to use, and you must select a data collection rule for Azure Monitor agent.

How do I configure VM Insights with AMA?

This is a fairly easy process; just note the dependency on data collection rules. Here is the official documentation.
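Once a machine is onboarded, a quick sanity check in Log Analytics confirms data is flowing; a sketch, assuming VM insights performance data is landing in the InsightsMetrics table as usual:

// Verify VM insights data is arriving from the Azure Monitor agent
InsightsMetrics
| where TimeGenerated > ago(15m)
| summarize count() by Computer, Namespace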

Enjoy and happy monitoring!


Azure Monitor Basic Logs

What are Basic Logs?

Relatively new and still in preview, Basic Logs are a way to save costs when working with high-volume logs typically associated with debugging, troubleshooting and auditing, but they should not be used where analytics and alerts are important.

How do I configure one?

Firstly, it is important to note that tables created with the Data Collector API do not support Basic Logs.
The following are supported:

  • Logs created via Data Collection Rules (DCR)
  • ContainerLogV2 (used by Container Insights)
  • AppTraces

All tables are set to the Analytics plan by default. To change this, navigate to Log Analytics workspaces and select the workspace with the log you want to change. Choose Tables from the navigation rail, select the log, and choose Manage Table on the right.

Change the table plan to Basic. Note that the default retention changes from 30 days to 8 days. This can, of course, also be done through the API or CLI.


How can I query Basic Logs?

Log queries against Basic Logs are optimized for simple data retrieval and support only a subset of KQL operators, such as where, extend, project, and parse.

There are also some other limitations: for example, the time range cannot be specified in the query, and purge is not available. For the full list of limitations, refer to the official documentation.
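For example, a simple retrieval against a Basic Logs table might look like the sketch below (assuming ContainerLogV2 has been switched to the Basic plan; the container name is a placeholder, and remember the time range is supplied by the query pane or API call rather than the query text):

ContainerLogV2
| where ContainerName == "myapp"
| project TimeGenerated, Computer, LogMessage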

How much cheaper is it?

Basic Logs cost $0.615 per GB of data ingested.
The standard pay-as-you-go price is $2.76 per GB (with 5 GB free per month), with discounts for purchasing a commitment tier of up to 5,000 GB per day.

During the preview there is no charge for querying Basic Logs; however, there will be a small charge once the feature reaches GA, based on the amount of data the query scans, not just the amount of data the query returns. At this time the expected cost is $0.007 per GB of data scanned.
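As a rough worked example using the figures above: ingesting 100 GB of verbose debug logs per day would cost about 100 × $0.615 = $61.50 per day on Basic Logs versus 100 × $2.76 = $276 per day at the standard pay-as-you-go rate, before any commitment-tier discounts.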

Basic Logs Architecture Reference


Azure Managed Grafana

Recently announced, the preview for Azure Managed Grafana is now available. For those who maybe don't know, Grafana is an observability platform which lets you create mixed data dashboards from a variety of sources.

And now you can run it in Azure!

Let's get started

First you need to create a Grafana workspace: in the Azure portal, search for Azure Managed Grafana, select it, and click +Create. Fill out all the usual suspects, choosing your subscription, resource group, location and workspace name.

On the following tab, create a managed identity, as this is how Grafana will be able to access data from your resources. Then create your workspace.

Next, we need access

Grafana needs access to the resources you want to build dashboards for. You can easily do this with Azure RBAC, and it can be done at the resource group or subscription level as well.

Using Access control (IAM), give Monitoring Reader access to the managed identity you created as part of your Grafana workspace.

Let's make some dashboards

First, let's open our Grafana instance: navigate to the workspace previously created and click on the endpoint address on the Overview blade.

On the landing page there's a notification to configure a data source. We'll be using Azure Monitor: simply click add data source and choose Azure Monitor from among all of the available options. We can see here that there are plenty to choose from, which is part of the power of Grafana.

Name your connection, choose managed identity, and select the relevant subscription. Then click Save & test.
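From there you can start charting; for instance, a Log Analytics panel could graph CPU usage with a short KQL query like this sketch (assuming the workspace collects the Perf table):

Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| order by TimeGenerated asc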

A nice feature is the ability to access pre-built dashboards out of the box, clicking on the Dashboards tab shows us several options which we can import with the click of a button.

And we're all set! Below is an example of the Azure Storage Insights dashboard, which I was able to configure from start to finish in less than 5 minutes.

Overall, Azure Managed Grafana is very cool and offers an alternative approach to mixed data dashboards from a variety of sources. Of course, you can also create customized visuals and there are plenty of options to ensure you end up with something meaningful and perfect for your needs. I’m looking forward to seeing this go GA.

Happy dashboarding!



Calling a Logic App from an Azure Monitor workbook!

Workbooks have a couple of new action types which let you do some very cool things. The one I'm going to focus on now is called ARM actions, and this is some amazing stuff; if you thought workbooks were powerful before, then watch this space!

ARM Actions

First, ARM actions can be used to call various Azure actions against a resource. In the example workbook you can Start and Stop a website, which is quite useful as you can do it directly from the workbook without having to navigate to the resource blade.

This uses a parameter to fetch the site name and pipes it into an ARM action of Start.
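For reference, the ARM action path for that Start action looks something like the line below ({Site} stands in for the site-name parameter, and the api-version is illustrative; check the Microsoft.Web REST reference for a current one):

/subscriptions/{Subscription:id}/resourceGroups/{RG}/providers/Microsoft.Web/sites/{Site}/start?api-version=2022-03-01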

Calling a Logic App

Super cool and very useful. Now let's look at how we can up our game a little bit. Using this same method you can actually call a Logic App; this is slightly more complex, as you need to have the ARM action path to said Logic App, which looks like this:

/subscriptions/{Subscription:id}/resourceGroups/{RG}/providers/Microsoft.Logic/workflows/<LogicApp Name>/triggers/manual/run?api-version=2016-06-01

Note the various parameters; you can also parametrize the Logic App name, though I have it hardcoded in this example. Also note that in this case the trigger type is manual, because the Logic App trigger is "When an HTTP request is received" and I am sending a JSON payload from the workbook to the Logic App.

You can also specify other triggers for Request, Recurrence and API Connection.

Now what can you do with this? Well, as you might imagine, the possibilities are endless. In my case, I'm calling the Logic App to populate a secondary set of app data into Log Analytics to add more scope to the original workbook.

Once the Logic App has run, the App Info column changes to Populated and the GetAppDetails prompt changes to Refresh; the data is then made visible in a second grid below.

Conclusion

I'm very excited by the world that has opened up with this type of advanced workbook, essentially turning workbooks from an awesome visual tool into an awesome manageability tool.

If you've made use of this functionality, I'd love to hear from you.

That’s all for now, happy workbooking!
