
Dynamics 365 Business Central: AI - Custom Vision



Hey everyone! I'll keep going on costing in the next video.

In this video, I want to interject with something called Custom Vision.

It's an artificial intelligence application that can be useful for NAV. 

Basically, everything is in the Azure cloud. Business Central sits in the Azure cloud, and alongside it we have something called Cortana Intelligence. Call it AI; it has a lot of AI features. When you put an item into Business Central, such as an item picture, it reaches out to the AI, which tries to recognize what you're putting in there.

If I put an orange in and upload a picture of it, the AI will find out that it's a citrus, that it's an orange, things like that. The problem is that you don't set the attributes that you get out of the artificial intelligence about the item picture.

You have no control over those attributes.

But there's a way to control that and that's something called Custom Vision.

You can train your own model in Custom Vision using pictures.

I'm going to do good oranges and bad oranges, or spoiled and not spoiled. Then I'll test by uploading a good orange and asking the question: “Is this an unspoiled orange?”

I'm only using two tags, like a yes/no Boolean. I could then feed that into Business Central, because it's in the Azure cloud. That's the idea behind it.
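To make the two-tag idea concrete, here is a minimal sketch of turning a Custom Vision classification result into that yes/no Boolean. The tag names (“spoiled” / “not spoiled”) are the ones created later in this video; the response shape mirrors the list of tag/probability pairs the Custom Vision prediction service returns, and the threshold is an assumption for illustration.

```python
def is_spoiled(predictions, threshold=0.5):
    """Return True if the 'spoiled' tag wins with at least `threshold` probability."""
    # Pick the tag the model is most confident about.
    best = max(predictions, key=lambda p: p["probability"])
    return best["tagName"] == "spoiled" and best["probability"] >= threshold

# Example response, shaped like Custom Vision's prediction output:
sample = [
    {"tagName": "not spoiled", "probability": 0.98},
    {"tagName": "spoiled", "probability": 0.02},
]
print(is_spoiled(sample))  # False: this orange looks fresh
```

Business Central would only ever see the final Boolean, not the raw probabilities.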

Let's see how that works; it's pretty neat.

Let's take a look at how this works. I'm in Business Central, and I go to Items to create a new item. Create New, and it's going to be just an item with no sales tax. It's an orange, but I'm not going to give Business Central any hint.

I'm just going to call it Test and go ahead and import a picture here. I pick the picture here (this is actually from Focus), and I'm just going to pick the test orange.

The picture comes up, and it recognizes that it's indoor, that it's an orange, it's citrus, and it's fruit. I can go ahead and change my description here to Orange, which is a much better description, hit OK, and the orange comes in.

It used Microsoft's Cortana recognition software to recognize the orange and attributes about the orange. The problem is that we would like to set up the attributes about this orange ourselves.

We might not care that it's a citrus. We might not care that the color is orange. We would just like to know that it's an orange, etcetera.

For example, if I uploaded a banana it might not know that it was a banana. It might just tell me it was yellow.

How can we set up our own attributes? There is something called Custom Vision, and I'm going to show you how that works. This is inside Microsoft Azure, and it's part of the Microsoft stack.

I'm going to create a test project called TestOrange. I did this in a session at NAVUG Focus, so if you were there, you probably recognize this.

I'm just going to create a new project and add images. I have a bunch of unspoiled oranges, and I'm just going to open them up; these are really pretty-looking oranges.

I'm going to add a tag. I can add a tag of my own, something that I create. I'll call it “not spoiled” and upload the images.

Now they're here and they all have the tag “not spoiled”. These are fresh oranges. I'm also going to add images that are spoiled, so these are really bad looking oranges, and upload them. 

I am going to tag them. I select all and I'm going to add a tag, and we're going to call them “spoiled”.

Now I have, in my workspace, 31 images. We have “not spoiled”, which are these, and we’ve got “spoiled”, which are these. They’re terribly spoiled.

What I'm going to do now is train the model. I want the model to understand whether an orange is spoiled or not spoiled.

Now it's trained.

It has 100% precision, which is really good.

Now we're going to actually test it. If I pick a picture here, the TestOrange, it comes up as “not spoiled” with 100% probability.
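The same quick test in the portal can also be done from code. Below is a hedged sketch that calls a published Custom Vision iteration over its prediction REST endpoint (the v3.0 URL shape); the endpoint host, project GUID, iteration name, and prediction key are all placeholders you would copy from the Custom Vision portal, not real values.

```python
import json
import urllib.request

def build_prediction_url(endpoint, project_id, iteration):
    # Shape of the Custom Vision image-classification prediction endpoint (v3.0).
    return (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
            f"/classify/iterations/{iteration}/image")

def classify_image(endpoint, project_id, iteration, key, image_bytes):
    # POSTs raw image bytes and returns the list of tag/probability predictions.
    req = urllib.request.Request(
        build_prediction_url(endpoint, project_id, iteration),
        data=image_bytes,
        headers={"Prediction-Key": key,
                 "Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["predictions"]

# Placeholder values; the real ones come from the Custom Vision portal.
print(build_prediction_url("https://<region>.api.cognitive.microsoft.com",
                           "<project-guid>", "Iteration1"))
```

Calling `classify_image(...)` with the bytes of the test orange picture would return the same “not spoiled” tag and probability shown in the portal.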

So now I have actually created a way to tag with my own tags. They're not tags that Microsoft assigns or that come out of the Microsoft engine.

I set up the tags for the picture, which is a big difference.

I can go ahead and test another one: “The Annoying Orange” is 100% spoiled! I don't know if you guys are familiar with that one.

This is a really interesting exercise where you can create your own tags. If you hook that up to Business Central, you can have Custom Vision recognize the picture, put your tag on it, and put that into the system.
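One way to hook it up: since Business Central exposes a REST API, the tag coming back from Custom Vision could be written onto the item record. The sketch below only builds the PATCH request; the tenant, environment, company and item GUIDs, the access token, and the choice of writing the tag into the item description are all hypothetical placeholders for illustration, not the method shown in the video.

```python
import json
import urllib.request

def build_item_patch(base_url, company_id, item_id, token, fields):
    # Business Central API items are addressed as companies({id})/items({id}).
    url = f"{base_url}/companies({company_id})/items({item_id})"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "If-Match": "*",  # skip the optimistic-concurrency check for this sketch
    }
    return urllib.request.Request(url, data=json.dumps(fields).encode("utf-8"),
                                  headers=headers, method="PATCH")

# Placeholder identifiers; real ones come from your Business Central tenant.
req = build_item_patch(
    "https://api.businesscentral.dynamics.com/v2.0/<tenant>/<env>/api/v2.0",
    "<company-guid>", "<item-guid>", "<access-token>",
    {"description": "Orange (not spoiled)"},  # hypothetical field for the tag
)
print(req.get_method(), req.full_url)
```

Sending the request (with real IDs and a token) would update the item, closing the loop from picture to Custom Vision tag to Business Central.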
