Oh, how true. While React seems conceptually simple, its state model requires us to hold multiple state machines in our head. It is all too easy to put state in various places and really screw it up, and then you get to figure out why.
While creating a story app for my kids, I ran into an interesting issue.
It worked just fine throughout a single story, but I wasn’t able to reset the story from the parent component.
A long time ago, in a galaxy far, far away… Or so it seemed. A work friend of my Dad’s gave me a strange gift for Christmas. I’m going to date myself now, but it was a circa-1980s Electronic Project Kit. I spent many, many hours building with that Kit, and really enjoyed it.
You can see the way it was connected — you would run a wire from a little spring to another component, all over the board. …
Every good app needs a good navigator. Just like a cross-country trip, a good navigator takes us from place to place in our app, ensuring that we get to the place we want to go. A bad navigator takes us… well, where the navigator wants to go. We get frustrated and put the app down, lowering user engagement.
I’m sure all of us remember the early days of Android where the use of the back button might take you to… the previous screen, a navigation menu, the previous app, or really anywhere! …
A few hours ago, I was working on a piece of React Native code that required dynamic image exports off the file system. Of course, I started with the React Native Image document: https://reactnative.dev/docs/image
React Native has some great docs, so after reading this, I figured it would be a piece of cake.
Specifically, what I wanted to do was have a JSON file with data, and have a list of images in that data with information about them. I didn’t know how many images, or how much data ahead of time.
After perusing the React Native docs, I realized that there are several ways to include images, all of them pretty self-explanatory… but none of them would really let me read images directly from JSON on a local file system without some extra work. …
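To make the limitation concrete, here is a minimal sketch of one common workaround, not the article’s actual code. The helper name (`toImageSource`), the JSON shape, and the paths are all assumptions. React Native’s `require()` is resolved statically at bundle time, so a path read from JSON at runtime can’t be passed to it; for images on the local file system, `<Image>` also accepts a `{ uri }` source object, which can be built dynamically:

```javascript
// Hypothetical JSON entry: { "file": "robot.png", "caption": "..." }
// Build a { uri } source for <Image> from runtime data, since
// require() can't take a dynamic path.
function toImageSource(entry, baseDir) {
  return { uri: 'file://' + baseDir + '/' + entry.file };
}

// In a component: <Image source={toImageSource(item, storyDir)} />
const source = toImageSource({ file: 'robot.png' }, '/data/stories');
```

The trade-off is that `{ uri }` images skip the bundler entirely, so you are responsible for making sure the files actually exist at that path on the device.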
So, my daughter *loves* Choose Your Own Adventures, and everything like them. The one thing that has driven her nuts, though: she cannot truly customize them. That’s why she loves the stories I tell, because she can participate and customize them to her heart’s content.
For instance, when she gets to paint a robot, she wants to specify the color. When she names her pet, she wants a different name than the story specifies. She gets pretty irritated: “But I don’t want to call it ‘Gus’!”
So, I got to thinking, how could we do this in a sort of interactive storytelling way on a tablet? …
“What do you think of the React Native debugger?” they asked.
Yeah. I blanked. Entirely.
Looking back, it wasn’t my finest fifteen minutes for something so basic. I’m sure they expected something like “Oh, I love Visual Studio Code, it’s awesome at single-stepping,” or maybe “You know, the layout inspector could be better.” I blanked. Just everything… gone. I mean, there are like 5000 ways to debug React Native, and I couldn’t name a single one. Not even the ones I’d been using for a while.
Needless to say, I did not get the job, but I did get a fantastic new interview question for when I interview others. (Side Note — the interviewer was pretty chill about it and just moved on. Very appreciative of their professionalism.) …
So, you’ve trained your amazing new AI Neural Net. It correctly picks stock prices, and ensures that you can make millions of dollars.
But how do you actually use this thing?
Each use of a trained neural net model is called an inference. Every run, meaning data applied to the input of the model and an output produced by the model, is one inference. When we have a trained neural net model, we don’t generally update the neuron weights during an inference. (Note: model inference is also known as prediction, serving, and model evaluation.)
A model can be composed of one or more neural nets. This is where it gets a little tricky — one or more of those neural nets might be updating its weights to learn more. You never know. …
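The distinction above can be sketched with a toy single-neuron “model” in plain JavaScript: the weights are fixed (already trained), and one call with one input producing one output is one inference. The weight, bias, and input values here are invented for illustration.

```javascript
// Toy single-neuron model with fixed, already-trained weights.
// All numbers are made up for illustration.
const weights = [0.5, -0.25];
const bias = 0.1;
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// One call = one inference: input in, output out, no weight update.
function infer(input) {
  const z = input.reduce((sum, x, i) => sum + x * weights[i], bias);
  return sigmoid(z);
}

const output = infer([1.0, 2.0]); // a single inference
```

A real model does the same thing at scale: many neurons, many layers, but still input in, output out, weights untouched, unless, as noted above, one of its component nets is deliberately still learning.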
This gives you the flexibility to work with many different types of machines. Combined with compute targets and driver scripts that point to different machines, you can send each job to the machine best suited for it.
It also gives you the ability to run up the cost really, really quickly. :)
See https://azure.microsoft.com/en-us/pricing/details/machine-learning/ for a list of the current shapes and costs. The most expensive of these is about $3/hour. That really doesn’t sound like much, and if you’re used to playing with small data sets, like MNIST, it isn’t. Training on the MNIST data set to near-100% accuracy using well-known techniques would cost about $0.24, which at that rate is under five minutes of compute. …
Okay, our initial result in part 1 of this article wasn’t great. Honestly, it could probably use more training. But our point here is not to train the heck out of this thing, it’s to move it to Azure.
This is a continuation of https://allangraves.medium.com/gans-on-the-azure-ml-sea-part-1-e3af65061900.
So, let’s get started.
For this, we’ll need a driver script, and a new training script. We could easily use our old training script, but since these files are used for learning, I’d like to separate out the new pieces we’ll add.
The last thing we want to do is upload our data set, which keeps Azure from cloning it from our local dir on every run. …
Wouldn’t it be cool if you could have your work done for you by a machine? Put some parameters in, then walk away, come back, and voila — fully formed homework!
For the low sum of just $19.99/month, you can! If you click the link below, you can sign up for the monthly work-a-holic plan! All you need to do is manually tune your data and your net, and find a way to run things, and boom: everything else will be taken care of automatically.
In every neural net, there are three things necessary to properly train it for inference…