Wednesday, September 23, 2020

Why Is K.I.S.S.ing So Hard?

Keep It Simple, Stupid!

I did get a chance to try my "improved", more 'accurate', less 'gamey' version of the rules, and once again, by the time the game was over, it was tedious, bordering on boring, and took far too long for a quick solo game. (Please imagine a clever gif of me scrunching up a piece of paper and dunking it in a waste basket.)

So I did what works best for me: I did other things for a while (including a 16thC game over Hangouts where I was rightfully trounced; sorry, no pics, but I expect the game will appear on the Sharp Brush blog).

Then, today, I came back with a fresh eye and an open mind.

Once more unto the Bridge! 
Turn 8ish of 15: the armies are all on board and well engaged.

The first step was to spend some time with my nose in books. Then, I let my subconscious mind guide me as I poked at the figures and started to think of other mechanisms, "the look of the thing", what needs to be shown and what doesn't and about the sorts of decisions I want to be making as a player.

Casualties mount. The Bodyguards charge into the battered grey infantry!


The next thing was to again regroup the figures into units of 8 infantry or 4 other figures, which is how they are painted. I then dumped the existing command control and activation rules, the fiddlier bits, the existing morale rules, the multiphase charge resolution, and the proposed reintroduction of pinned and rally rules.

...and they pursue, sweeping away the fleeing rebels. Then, once the refugees were clear... the two Rebel batteries opened fire on them at close range! The Guards quickly retreated to lick their wounds.

I then scribbled some notes outlining the new simple game, tweaked it once or twice for things that arose mid-game, and played an engaging, very close, occasionally nail-biting rematch of the same OHW scenario in roughly an hour.

The details are more abstracted, but then so are the shiny toys, and the things I had to think about as a player seemed to me more like things a General should be thinking about.

Turn 13/15. The Hochelaga Fusiliers are the last fresh Dominion unit. "Fix Bayonets" "CHARGE!!" and the last remnants of shaken rebel units flee over the bridge. Another incursion has been repulsed.

So, that's one happy test game. The rules have been amended to match, and the link is posted on my Rules blog page.

I think it's time to do some casting and painting and the like, and then try it again with a bigger scenario and more men!

Tuesday, September 22, 2020

Human Fall Flat Thermal Free Download

Step into the holiday season with Human: Fall Flat and work together to unlock new skins for all players! Place the presents on the conveyor belt, and hit the top of The Totalizer to unveil the biggest present of all!

Create a lobby with your friends and watch Bob fall, flail, wobble and stumble together. Need a hand getting that boulder on to a catapult, or need someone to jump on the end of your see-saw? Jump online today and make Bob's dreams come true!

Bob is just a normal human with no superpowers, but given the right tools he can do a lot. Misuse the tools and he can do even more!

The world of Human: Fall Flat features advanced physics and innovative controls that cater for a wide range of challenges. Bob's dreams of falling are riddled with puzzles to solve and distractions to experiment with for hilarious results. The worlds may be fantastical, but the laws of physics are very real.

Will you try to open that mysterious door, or would you rather see how far you can throw a speaker set out that window?

GAMEPLAY AND SCREENSHOTS :
DOWNLOAD GAME:
♢ Click one of the buttons below to download this game.
♢ View detailed instructions for downloading and installing the game here.
♢ Use 7-Zip to extract RAR, ZIP and ISO files. Install PowerISO to mount ISO files.

Human Fall Flat Thermal Free Download
http://pasted.co/af29b5ae

INSTRUCTIONS FOR THIS GAME
➤ Download the game by clicking on the button link provided above.
➤ Download the game on the host site and turn off your Antivirus or Windows Defender to avoid errors.
➤ Once the download is complete, locate that file.
➤ To open the .iso file, use PowerISO and run the setup as admin, then install the game on your PC.
➤ Once the installation process is complete, run the game's exe as admin and you can now play the game.
➤ Congratulations! You can now play this game for free on your PC.
➤ Note: If you like this video game, please buy it and support the developers of this game.

SYSTEM REQUIREMENTS:
(Your PC must meet or exceed these specs to run this game.)


Minimum:
• OS: Windows XP/Vista/7/8/8.1/10 x86 and x64
• Processor: Intel Core2 Duo E6750 (2 * 2660) or equivalent | AMD Athlon 64 X2 Dual Core 6000+ (2 * 3000) or equivalent
• Memory: 1024 MB RAM
• Graphics: GeForce GT 740 (2048 MB) or equivalent | Radeon HD 5770 (1024 MB)
• Storage: 500 MB available space

Recommended:
• OS: Windows XP/Vista/7/8/8.1/10 x86 and x64
• Processor: Intel Core2 Quad Q9300 (4 * 2500) or equivalent | AMD A10-5800K APU (4*3800) or equivalent
• Memory: 2048 MB RAM
• Graphics: GeForce GTX 460 (1024 MB) or equivalent | Radeon HD 7770 (1024 MB)
• Storage: 500 MB available space
Supported Languages: English, Italian, Spanish, Polish, Russian, Portuguese-Brazil, and Simplified Chinese.
If you have any questions or encountered broken links, please do not hesitate to comment below. :D

Sunday, September 13, 2020

Changing Wax By Jared Quan, Book Review

The world of Wax is divided between the dark and light. Each side has a single leader, a Lord of Darkness and a King of Light. Even for them, there is a stronger force: the Master Book of Magic governs all on Wax. The monks of Wax are the keepers and scholars of the book, and according to the book they need a witch for the upcoming proceedings between the two rulers. What they didn't know was that the witch would tell of a prophecy that would change Wax.

Changing Wax by Jared Quan is a middle grade fantasy. The publisher includes a statement that all material is suitable for children over the age of 12. I obtained a copy of Changing Wax for review purposes.

Plot

Changing Wax is an epic quest adventure story. An unlikely trio of adventurers (a monk, a warrior, and an imp) come together through circumstances that at first seem random. But isn't that how prophecies usually work? The companions don't know they are part of a prophecy, but the leaders of the world do. The leaders decide they need to save themselves from the prophecy, and to do that they have to eliminate the three adventurers, so they send them on impossible quests.

The quests don't turn out as planned, but there is enough hope for the Lord of Darkness, the King of Light, and the Head Monk to keep sending the remaining party members on to the next one.

The story comes to a climactic conclusion in a battle that doesn't just involve the armies, but everyone from the lands of light and dark.

Characters

The 3 main characters are good examples of the type of people who are part of the world of Wax.

Gorath is a monk who is known for not being as competent as others. Head Monk Towe was counting on Gorath's level of competence when he sent him on a mission. Gorath, though, has learned from his past and is prepared for just about anything. He has stowed everything he can think of that might be useful on his assignment. Luckily for him, he has a bag that can hold it all, stored in another dimension, so the bag doesn't weigh much.

Dallion Quimbie Haberdasher Nocks Drisbie Horlon Everton, also known as Drip, is an imp with a peculiarity of wanting to be a writer instead of a servant like all other imps. He is sharp and able to understand what he sees. Drip can do more than write about others, he can use the information to create viable plans.

Thomas Twostead is the twelfth child. She, yes she, was named Thomas because a prophetic wizard told her parents their twelfth child, Thomas, would do great things. Why let anything like gender stand in the way? Thomas has risen in the ranks of the Dark Scouts and is a good fighter. It helps that she has a magical shield and sword.

The characters in Changing Wax fit into the humorous fantasy style of the book. They are somewhat whimsical, and each has something about them that will be familiar to anyone struggling to find their place in a world such as junior high school.

Style

Changing Wax is written in episodes. These aren't just a different name for chapters; the story was originally published episodically, one episode a week. After the story was complete, the episodes were brought together and published as a book. This episodic approach means each episode is roughly the same length, and each usually ends with a hook, or cliffhanger, to bring the reader back the following week. You can read more episodic tales from the publisher, Big World Network.

Jared Quan also writes with a flair for humor reminiscent of Terry Pratchett's style in his Discworld series (wiki page), but well targeted for a younger audience. Points of interest are regularly brought up that provide a humorous view of the world of Wax, the people who live there, where they live, and the events they have to deal with.

Along with the humor, there are magical items that do all sorts of different things. Several even have personalities that help carry the story forward.

All of this comes together for a wonderful middle grade fantasy tale.

Overall

Changing Wax is a fun read. I think many middle graders will enjoy it. There are characters for everyone to relate to. They are put in serious, yet ridiculous, situations and find a way to do the right thing for each particular dilemma.

The episodic approach allows for short bursts of reading to fit into shorter amounts of available time; each episode is about six pages long. The humor is fitting for the age group, as is the rest of the material. There is a war, but nothing graphic.

I recommend Changing Wax for readers who would like a whimsical fantasy.

You can find Changing Wax on Amazon (link).

About the Author (from the book)

Jared's childhood was split between Phoenix, AZ, and Snowflake, AZ. Jared's sense of writing and adventure was established with the help of his fourth grade teacher, who introduced him to authors like C.S. Lewis and J.R.R. Tolkien. This was combined with the stories of his grandfather, a Seabee with the U.S. Navy during World War II, and the stories of the roles the Quan (Guan) family played in the Romance of the Three Kingdoms era in China (184 AD–280 AD). He possesses a great deal of love for history and war. Jared achieved a milestone when his adventure/spy novel Ezekiel's Gun was published in 2010. He hopes to have several more books published in the future. He would like to travel the world and see places like China, Europe and South Korea.

I'm working at keeping my material free of subscription charges by supplementing costs by being an Amazon Associate and having advertising appear. I earn a fee when people make purchases of qualified products from Amazon when they enter the site from a link on Guild Master Gaming and when people click on an ad. If you do either, thank you.

If you have a comment, suggestion, or critique please leave a comment here or send an email to guildmastergaming@gmail.com.

I have articles being published by others and you can find most of them on Guild Master Gaming on Facebook and Twitter (@GuildMstrGmng).

People Behind The Meeples- Episode 239: Jon Vallerand

Welcome to People Behind the Meeples, a series of interviews with indie game designers.  Here you'll find out more than you ever wanted to know about the people who make the best games that you may or may not have heard of before.  If you'd like to be featured, head over to http://gjjgames.blogspot.com/p/game-designer-interview-questionnaire.html and fill out the questionnaire! You can find all the interviews here: People Behind the Meeples. Support me on Patreon!


Name: Jon Vallerand
Email: jovallerand@gmail.com
Location: Montreal, Canada
Day Job: I work as a department coordinator in an IT consultancy firm.
Designing: Five to ten years.
Webpage: subsurfacegames.ca
Blog: subsurfacegames.ca
BGG: jvallerand
Facebook: Subsurface Games
Twitter: @jvdesignsgames
Instagram: @games_subsurface
Find my games at: subsurfacegames.ca
Today's Interview is with:

Jon Vallerand
Interviewed on: 7/16/2020

In this interview we meet Jon Vallerand, designer of With a Smile & a Gun, which is currently funded on Kickstarter and reaching for a few stretch goals before the campaign wraps up at the end of this week. Read on to learn more about Jon and his many other projects.

Some Basics
Tell me a bit about yourself.

How long have you been designing tabletop games?
Five to ten years.

Why did you start designing tabletop games?
I've always turned every hobby of mine into a creative endeavour: I've tried writing fantasy stories, then poetry, then graphic novels, then movies, then standup, then making an RPG. With board games, it felt more like I was in the right place, like it's the kind of thing my brain just groks.

What game or games are you currently working on?
I am currently working on a coop dice drafting game about superheroes trying to gain the public's appreciation; a trick taking game about running a PR firm; a set collection card game about journalism; and a worker placement/time track game about running an art gallery.

Have you designed any games that have been published?
My game With a Smile & a Gun is currently on Kickstarter (up to August 14)!

What is your day job?
I work as a department coordinator in an IT consultancy firm.

Your Gaming Tastes
My readers would like to know more about you as a gamer.

Where do you prefer to play games?
In my basement!

Who do you normally game with?
I have two game groups which, pre-Covid, used to meet weekly, and I play a weekly game of Arkham Horror LCG with my brother.

If you were to invite a few friends together for game night tonight, what games would you play?
These days I really want to play Power Grid! It's one of the best games that doesn't have a satisfying online version, and I haven't played it in too long!

And what snacks would you eat?
I'm not much of a snacker during game time, but I'd definitely be drinking a LOT of coffee.

Do you like to have music playing while you play games? If so, what kind?
I don't. I don't mind instrumental music, but to be honest, I'd rather be in a quiet room. I feel like quiet rooms encourage discussion and also make it easier to focus.

What's your favorite FLGS?
It's called Imaginius. They organize a lot of cool events to grow the local community and do a great job of it!

What is your current favorite game? Least favorite that you still enjoy? Worst game you ever played?
My favorite is Arkham Horror LCG, which is interesting because it's so far out of the type of games I usually enjoy. My least favorite game that I'd still play is Wazabi. It's a sushi-themed game with a Scrabble-like tile laying mechanism. I don't like it much, but my partner does!

What is your favorite game mechanic? How about your least favorite?
My favorite mechanism is engine building, especially card based stuff. I love the feeling of going from "I'll take 2 coins and a point" to "I'll take 8 coins, which gives me 6 wood, which I can then turn into a castle...". My least favorite mechanism is probably rolling for success. I hate the feeling of building up to your action, and potentially coming up empty because you were unlucky. I don't mind randomness changing my plans, but I mind it when it means I do nothing on my turn.

What's your favorite game that you just can't ever seem to get to the table?
Power Struggle. It's an awesome area majority game with a brilliant bribery mechanism, but it really requires a very specific type of people to enjoy it.

What styles of games do you play?
I like to play Board Games, Card Games, RPG Games, Video Games

Do you design different styles of games than what you play?
I like to design Board Games, Card Games

OK, here's a pretty polarizing game. Do you like and play Cards Against Humanity?
No

You as a Designer
OK, now the bit that sets you apart from the typical gamer. Let's find out about you as a game designer.

When you design games, do you come up with a theme first and build the mechanics around that? Or do you come up with mechanics and then add a theme? Or something else?
I am usually inspired by mechanisms, but my designs are then driven by moments. Whether I come from theme or mechanisms, I try to ask myself "what intense, memorable moments can arise from this", and then design towards those.

Have you ever entered or won a game design competition?
I've entered a few, and the feedback I got was worth its weight in gold.

Do you have a current favorite game designer or idol?
I really look up to Phil Walker Harding, who keeps on making games with simple rulesets, yet which can be played and enjoyed at many different levels of expertise.

Where or when or how do you get your inspiration or come up with your best ideas?
By playing other games, by reading about other fields or creative disciplines.

How do you go about playtesting your games?
There are a few local design groups I like to take part in, and I am also fortunate enough to have several dedicated playtesters around me.

Do you like to work alone or as part of a team? Co-designers, artists, etc.?
I like both. My first finished design was a co-design, and I would never have gotten past the early stages without a co-designer for that added accountability. That being said, I feel like co-designs have both a higher ceiling and a lower floor: working in a team that's a poor fit is terrible. Even on solo designs, I usually try to get one of my playtesters into a special "lead" role, getting them to an expert level at the game so I have someone to bounce ideas off of. As part of self-publishing With a Smile & a Gun, I've worked with multiple people (artist, graphic designer, rules editor, solo designer), and it's been a blast to share that project with others.

What do you feel is your biggest challenge as a game designer?
Lack of time? Between family and work, there isn't a lot of time, or more accurately a lot of energy, left for design.

If you could design a game within any IP, what would it be?
Fast and Furious. I want to make a game that captures both the progression from TV thief to superhero, and the ridiculously over-the-top physics breaking action.

What do you wish someone had told you a long time ago about designing games?
To ensure the core engine of the game works before spending time on creating more content for it.

What advice would you like to share about designing games?
Network. Talk to other designers, to other gamers, get your game out there. If you go to a convention, don't just schedule pitch meetings and go to your room in between them: hang out, talk to people. Not only will it make your design career go so much faster, but the people are the best reason to get into board games.

Would you like to tell my readers what games you're working on and how far along they are?
Games that will soon be published are: Cartographia (working title)
This is what I have currently crowdfunding: With a Smile & a Gun, a 2-player dice drafting, area majority game set in a noir Prohibition era
Currently looking for a publisher I have: Art Traders, a one-way street worker placement game about running an art gallery Off the Record, a set collection card game about being a beat journalist
Games that I'm playtesting are: SuPR (working title), a game about superheroes who want the public to like them
Games that are in the early stages of development and beta testing are: An old PR trick (working title), a trick taking game about a PR firm
And games that are still in the very early idea phase are: Too many to list!

Are you a member of any Facebook or other design groups? (Game Maker's Lab, Card and Board Game Developers Guild, etc.)
Most of them!

And the oddly personal, but harmless stuff…
OK, enough of the game stuff, let's find out what really makes you tick! These are the questions that I'm sure are on everyone's minds!

Star Trek or Star Wars? Coke or Pepsi? VHS or Betamax?
Firefly, Perrier, Netflix

What hobbies do you have besides tabletop games?
There are other hobbies? I love NBA basketball and reading about psychology, especially behavioural science.

What is something you learned in the last week?
That you can add emojis on PC by pressing Win + .

Favorite type of music? Books? Movies?
90s rock, YA fantasy, and the Fast and Furious franchise, after they became superheroes

What was the last book you read?
A Conjuring of Light, the 3rd book in the Darker Shade of Magic series

Do you play any musical instruments?
No! I once took a music class where I was demoted to clapping, and asked not to do it loud enough that others would hear.

Tell us something about yourself that you think might surprise people.
I don't remember colours. Unless someone says it, in which case I remember the word, my memory doesn't store that information for some reason!

Tell us about something crazy that you once did.
I quit my stable teaching job for an entry-level position in IT, a field in which I had zero experience and even less training.

Biggest accident that turned out awesome?
I broke my ankle and had to stop playing football. While not awesome, I'm not sure I would have gotten through college while playing.

Who is your idol?
My partner. She has this desire to help others, and the ability to emotionally separate her stuff from theirs.

What would you do if you had a time machine?
Probably go back and tell young me that what they think matters doesn't, and that some stuff they don't think of does.

Are you an extrovert or introvert?
I think I'm a semi-extrovert. If there's no extrovert in a group, I'll take that mantle, but if somebody else is enjoying the spotlight, I'm okay letting them have it.

If you could be any superhero, which one would you be?
Nightcrawler. I think teleportation is the most useful superpower there is.

Have any pets?
2 cats

When the next asteroid hits Earth, causing the Yellowstone caldera to explode, California to fall into the ocean, the sea levels to rise, and the next ice age to set in, what current games or other pastimes do you think (or hope) will survive into the next era of human civilization? What do you hope is underneath that asteroid to be wiped out of the human consciousness forever?
I hope the Fast and Furious movies survive, except for Tokyo Drift. That one can be forgotten.

If you'd like to send a shout out to anyone, anyone at all, here's your chance (I can't guarantee they'll read this though):
Sure! I'd like to tell the person who is currently reading this (yes, you!) that they rock!

Just a Bit More
Thanks for answering all my crazy questions! Is there anything else you'd like to tell my readers?

Black lives matter. Trans lives matter. The planet is dying. Wear a mask.




Thank you for reading this People Behind the Meeples indie game designer interview! You can find all the interviews here: People Behind the Meeples and if you'd like to be featured yourself, you can fill out the questionnaire here: http://gjjgames.blogspot.com/p/game-designer-interview-questionnaire.html

Did you like this interview?  Please show your support: Support me on Patreon! Or click the heart at Board Game Links, like GJJ Games on Facebook, or follow on Twitter.  And be sure to check out my games on Tabletop Generation.

Friday, September 4, 2020

Tech Book Face Off: Data Smart Vs. Python Machine Learning

After reading a few books on data science and a little bit about machine learning, I felt it was time to round out my studies in these subjects with a couple more books. I was hoping to get some more exposure to implementing different machine learning algorithms as well as diving deeper into how to effectively use the different Python tools for machine learning, and these two books seemed to fit the bill. The first book with the upside-down face, Data Smart: Using Data Science to Transform Data Into Insight by John W. Foreman, looked like it would fulfill the former goal and do it all in Excel, oddly enough. The second book with the right side-up face, Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow by Sebastian Raschka and Vahid Mirjalili, promised to address the second goal. Let's see how these two books complement each other and move the reader toward a better understanding of machine learning.

Data Smart front cover VS. Python Machine Learning front cover

Data Smart

I must admit, I was somewhat hesitant to get this book. I was worried that presenting everything in Excel would be a bit too simple to really learn much about data science, but I needn't have been concerned. This book was an excellent read for multiple reasons, not least of which is that Foreman is a highly entertaining writer. His witty quips about everything from middle school dances to Target predicting teen pregnancies were a great motivator to keep me reading along, and more than once I caught myself chuckling out loud at an unexpectedly absurd reference.

It was refreshing to read a book about data science that didn't take itself seriously and added a bit of levity to an otherwise dry (interesting, but dry) subject. Even though it was lighthearted, the book was not a joke. It had an intensity to the material that was surprising given the medium through which it was presented. Spreadsheets turned out to be a great way to show how these algorithms are built up, and you can look through the columns and rows to see how each step of each calculation is performed. Conditional formatting helps guide understanding by highlighting outliers and important contrasts in the rows of data. Excel may not be the best choice for crunching hundreds of thousands of entries in an industrial-scale model, but for learning how those models actually work, I'm convinced that it was a worthy choice.

The book starts out with a little introduction that describes what you got yourself into and justifies the choice of Excel for those of us that were a bit leery. The first chapter gives a quick tour of the important parts of Excel that are going to be used throughout the book—a skim-worthy chapter. The first real chapter jumps into explaining how to build up a k-means cluster model for the highly critical task of grouping people on a middle school dance floor. Like most of the rest of the chapters, this one starts out easy, but ramps up the difficulty so that by the end we're clustering subscribers for email marketing with a dozen or so dimensions to the data.
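Outside the spreadsheet, the core of that chapter's algorithm is only a few lines of Python. This is a minimal from-scratch sketch of k-means on toy two-cluster data (my own invented points, not the book's dance-floor example):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated clusters of "dancers"
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels, cents = kmeans(pts, k=2)
```

Real use adds smarter initialization (k-means++) and multiple restarts, which is what scikit-learn's `KMeans` does by default; the spreadsheet version makes every one of these steps visible in the cells.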

Chapter 3 switches gears from an unsupervised to a supervised learning model with naïve Bayes for classifying tweets about Mandrill the product vs. the animal vs. the Mega Man X character. Here we can see how irreverent, but on-point Foreman is with his explanations:
Because naïve Bayes is often called "idiot's Bayes." As you'll see, you get to make lots of sloppy, idiotic assumptions about your data, and it still works! It's like the splatter-paint of AI models, and because it's so simple and easy to implement (it can be done in 50 lines of code), companies use it all the time for simple classification jobs.
Every chapter is like this and better. You never know what Foreman's going to say next, but you quickly expect it to be entertaining. Case in point, the next chapter is on optimization modeling using an example of, what else, commercial-scale orange juice mixing. It's just wild; you can't make this stuff up. Well, Foreman can make it up, it seems. The examples weren't just whimsical and funny, they were solid examples that built up throughout the chapter to show multiple levels of complexity for each model. I was constantly impressed with the instructional value of these examples, and how working through them really helped in understanding what to look for to improve the model and how to make it work.
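To make the naïve Bayes chapter concrete: count words per label, multiply smoothed per-word probabilities (in log space), and pick the most probable label. Here is a from-scratch sketch in the spirit of the Mandrill example; the training "tweets" are invented:

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (label, tokens). Count words per label and docs per label."""
    word_counts, doc_counts = {}, Counter()
    for label, tokens in docs:
        word_counts.setdefault(label, Counter()).update(tokens)
        doc_counts[label] += 1
    return word_counts, doc_counts

def classify(tokens, word_counts, doc_counts):
    """Pick the label maximizing log P(label) + sum of log P(word | label),
    with add-one smoothing; the 'idiotic' assumption is that words are
    independent given the label."""
    vocab = {w for c in word_counts.values() for w in c}
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, -math.inf
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)
        denom = sum(counts.values()) + len(vocab)
        for w in tokens:
            score += math.log((counts[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    ("app",    "mandrill api email send".split()),
    ("app",    "mandrill smtp api integration".split()),
    ("animal", "mandrill monkey zoo jungle".split()),
    ("animal", "mandrill primate rainforest".split()),
]
word_counts, doc_counts = train(docs)
label = classify("mandrill api email".split(), word_counts, doc_counts)  # "app"
```

This is roughly the same multinomial flavor the chapter builds, just in Python instead of Excel, and it really does fit in a few dozen lines.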

After optimization came another dive into cluster analysis, but this time using network graphs to analyze wholesale wine purchasing data. This model was new to me, and a fascinating way to use graphs to figure out closely related nodes. The next chapter moved on to regression, both linear and non-linear varieties, and this happens to be the Target-pregnancy example. It was super interesting to see how to conform the purchasing data to a linear model and then run the regression on it to analyze the data. Foreman also had some good advice tucked away in this chapter on data vs. models:
You get more bang for your buck spending your time on selecting good data and features than models. For example, in the problem I outlined in this chapter, you'd be better served testing out possible new features like "customer ceased to buy lunch meat for fear of listeriosis" and making sure your training data was perfect than you would be testing out a neural net on your old training data.

Why? Because the phrase "garbage in, garbage out" has never been more applicable to any field than AI. No AI model is a miracle worker; it can't take terrible data and magically know how to use that data. So do your AI model a favor and give it the best and most creative features you can find.
As I've learned in the other data science books, so much of data analysis is about cleaning and munging the data. Running the model(s) doesn't take much time at all.
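
To make the regression step concrete, here is a minimal sketch of fitting a linear model by least squares; the feature matrix and targets are invented, not the book's Target data:

```python
import numpy as np

# Invented data: each row is a customer's two purchase signals, y is the target.
X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 1.0], [4.0, 2.0]])
y = np.array([1.0, 3.0, 4.0, 6.0])

# Prepend an intercept column, then solve the least-squares problem.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef  # fitted values; residuals y - pred show what the model misses
```

Foreman's point about features over models applies directly here: swapping in a fancier solver changes nothing if the columns of X carry no signal.
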
We're into chapter 7 now with ensemble models. This technique takes a bunch of simple, crappy models and improves their performance by putting them to a vote. The same pregnancy data was used from the last chapter, but with this different modeling approach, it's a new example. The next chapter introduces forecasting models by attempting to forecast sales for a new business in sword-smithing. This example was exceptionally good at showing the build-up from a simple exponential smoothing model to a trend-corrected model and then to a seasonally-corrected cyclic model all for forecasting sword sales.
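The starting point of that forecasting build-up, simple exponential smoothing, is small enough to sketch in full (the sales numbers are invented):

```python
def exp_smooth(series, alpha):
    """Simple exponential smoothing: the running level is a weighted blend
    of the newest observation (weight alpha) and the previous level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # doubles as the one-step-ahead forecast

sales = [10, 12, 11, 13, 14]  # invented monthly sword sales
forecast = exp_smooth(sales, alpha=0.5)
```

The trend-corrected and seasonally-corrected models the chapter builds next each layer another smoothed component on top of this single level.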

The next chapter was on detecting outliers. In this case, the outliers were exceptionally good or exceptionally bad call center employees even though the bad employees didn't fall below any individual firing thresholds on their performance ratings. It was another excellent example to cap off a whole series of very well thought out and well executed examples. There was one more chapter on how to do some of these models in R, but I skipped it. I'm not interested in R, since I would just use Python, and this chapter seemed out of place with all the spreadsheet work in the rest of the book.
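As a toy stand-in for that call-center example, here is the crudest possible outlier test, z-scores against the group mean (the ratings are invented; the book's approach is multivariate and considerably richer):

```python
import statistics

def z_outliers(values, threshold=2.0):
    """Flag values lying more than `threshold` standard deviations
    from the group mean."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / sd > threshold]

# invented performance ratings; one employee is far below the pack
ratings = [70, 72, 68, 71, 69, 73, 30]
flagged = z_outliers(ratings)
```

One weakness worth noting: the outlier itself inflates the standard deviation it is judged against, so this crude test gets less sensitive exactly when outliers are present.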

What else can I say? This book was awesome. Every example of every model was deep, involved, and appropriate for learning the ins and outs of that particular model. The writing was funny and engaging, and it was clear that Foreman put a ton of thought and energy into this book. I highly recommend it to anyone wanting to learn the inner workings of some of the standard data science models.

Python Machine Learning

This is a fairly long book, certainly longer than most books I've read recently, and a pretty thorough and detailed introduction to machine learning with Python. It's a melding of a couple other good books I've read, containing quite a few machine learning algorithms that are built up from scratch in Python a la Data Science from Scratch, and showing how to use the same algorithms with scikit-learn and TensorFlow a la the Python Data Science Handbook. The text is methodical and deliberate, describing each algorithm clearly and carefully, and giving precise explanations for how each algorithm is designed and what their trade-offs and shortcomings are.

As long as you're comfortable with linear algebraic notation, this book is a straightforward read. It's not exactly easy, but it never takes off into the stratosphere with the difficulty level. The authors also assume you already know Python, so they don't waste any time on the language, instead packing the book completely full of machine learning stuff. The shorter first chapter still does the introductory tour of what machine learning is and how to install the correct Python environment and libraries that will be used in the rest of the book. The next chapter kicks us off with our first algorithm, showing how to express a perceptron classifier as a mathematical model, implement it in Python code, and then use it through scikit-learn. This basic sequence is followed for most of the algorithms in the book, and it works well to smooth out the reader's understanding of each one. Model performance characteristics, training insights, and decisions about when to use the model are highlighted throughout the chapter.
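The perceptron's learning rule really is simple enough to fit in a few lines. Here's my own minimal illustration of the idea (not the book's listing): keep a weight vector with a bias term, and nudge the weights toward any example the model gets wrong.

```python
def predict(w, x):
    # w[0] is the bias term; returns the class label (+1 or -1)
    net = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if net >= 0 else -1

def train_perceptron(X, y, epochs=10, eta=1.0):
    # Start with zero weights; each misclassified example pulls the
    # weights toward its target label, scaled by the learning rate eta.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, target in zip(X, y):
            update = eta * (target - predict(w, xi))  # zero when correct
            w[0] += update
            w[1:] = [wj + update * xj for wj, xj in zip(w[1:], xi)]
    return w
```

On any linearly separable toy data set (say, an AND-style labeling of four points), this converges to weights that classify every example correctly, which is the guarantee the perceptron convergence theorem gives you.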

Chapter 3 delves deeper into perceptrons by looking at different decision functions that can be used for the output of the perceptron model, and how they could be used for more things beyond just labeling each input with a specific class as described here:
In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.
The sigmoid function is a fundamental tool in machine learning, and it comes up again and again in the book. Midway through the chapter, they introduce three new algorithms: support vector machines (SVM), decision trees, and K-nearest neighbors. This is the first chapter where we see an odd organization of topics. It seems like the first part of the chapter really belonged with chapter 2, but including it here instead probably balanced chapter length better. Chapter length was quite even throughout the book, and there were several cases like this where topics were spliced and diced between chapters. It didn't hurt the flow much on a complete read-through, but it would likely make going back and finding things more difficult.
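That class-membership probability is just the sigmoid output read off before thresholding. A quick sketch with hypothetical weights (mine, not the book's):

```python
import math

def sigmoid(z):
    # Squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, b, x):
    # Logistic regression reports a probability of the positive class...
    return sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))

def predict_label(w, b, x, threshold=0.5):
    # ...and only the threshold step turns it into a hard class label
    return 1 if predict_proba(w, b, x) >= threshold else 0
```

So the "70% chance of rain" style of output is the `predict_proba` value, and the yes/no forecast is `predict_label`.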

The next chapter switches gears and looks at how to generate good training sets with data preprocessing, and how to train a model effectively without overfitting using regularization. Regularization is a way to systematically penalize the model for assigning large weights that would lead to memorizing the training data during training. Another way to avoid overfitting is to use ensemble learning with a model like random forests, which are introduced in this chapter as well. The following chapter looks at how to do dimensionality reduction, both unsupervised with principal component analysis (PCA) and supervised with linear discriminant analysis (LDA).
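In code, L2 regularization is just an extra penalty term bolted onto the loss. A sketch of my own (not the book's implementation) showing why it's often called weight decay:

```python
def l2_penalized_loss(loss, weights, lam):
    # Add lam * ||w||^2 to the unregularized loss, so large weights
    # cost the model something even if they fit the training data well
    return loss + lam * sum(w * w for w in weights)

def gradient_step(weights, grads, eta, lam):
    # The L2 term contributes 2 * lam * w to each weight's gradient,
    # which steadily shrinks large weights toward zero ("weight decay")
    return [w - eta * (g + 2 * lam * w) for w, g in zip(weights, grads)]
```

Even with a zero data gradient, each step multiplies every weight by a factor slightly less than one, which is the systematic penalty for memorizing the training set.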

Chapter 6 comes back to how to train your dragon…I mean model…by tuning the hyperparameters of the model. The hyperparameters are just the settings of the model, like what its decision function is or how fast its learning rate is. It's important during this tuning that you don't pick hyperparameters that are just best at identifying the test set, as the authors explain:
A better way of using the holdout method for model selection is to separate the data into three parts: a training set, a validation set, and a test set. The training set is used to fit the different models, and the performance on the validation set is then used for the model selection. The advantage of having a test set that the model hasn't seen before during the training and model selection steps is that we can obtain a less biased estimate of its ability to generalize to new data.
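The holdout scheme in that quote can be sketched in a few lines. This is my own toy version, not anything from the book:

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    # Shuffle once, then carve off the test and validation sets.
    # The training set fits the models, the validation set picks
    # between them, and the test set stays unseen until the very end.
    shuffled = list(data)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test
```

The key discipline is that hyperparameters are tuned only against the validation set, so the test score at the end is an honest estimate of generalization.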
It seems odd that a separate test set isn't enough, but it's true. Training a machine isn't as simple as it looks. Anyway, the next chapter circles back to ensemble learning with a more detailed look at bagging and boosting. (Machine learning has such creative names for things, doesn't it?) I'll leave the explanations to the book and get on with the review, so the next chapter works through an extended example application to do sentiment analysis of IMDb movie reviews. It's kind of a neat trick, and it uses everything we've learned so far together in one model instead of piecemeal with little stub examples. Chapter 9 continues the example with a little web application for submitting new reviews to the model we trained in the previous chapter. The trained model will predict whether the submitted review is positive or negative. This chapter felt a bit out of place, but it was fine for showing how to use a model in a (semi-)real application.

Chapter 10 covers regression analysis in more depth with single and multiple linear and nonlinear regression. Some of this stuff has been seen in previous chapters, and indeed, the cross-referencing starts to get a bit annoying at this point. Every single time a topic comes up that's covered somewhere else, it gets a reference with the full section name attached. I'm not sure how I feel about this in general. It's nice to be reminded of things that you read about hundreds of pages back, and I've read books that are more confusing for not having done enough of this linking, but it does get tedious when the immediately preceding sections are referenced repeatedly. The next chapter is similar with a deeper look at unsupervised clustering algorithms. The new k-means algorithm is introduced and compared against algorithms covered in chapter 3. This chapter also covers how to decide whether the number of clusters chosen is appropriate for the data, something that's not so easy for high-dimensional data.
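The core k-means loop (Lloyd's algorithm) is short enough to show here; this is my own one-dimensional toy, not the book's version, seeded naively with the first k points:

```python
def kmeans_1d(points, k, iters=20):
    # Lloyd's algorithm on 1-D data: alternate between assigning each
    # point to its nearest centroid and moving each centroid to the
    # mean of the points assigned to it.
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious clumps of (hypothetical) data pull the centroids apart:
print(kmeans_1d([1, 2, 3, 10, 11, 12], 2))
```

The hard parts the chapter deals with, like choosing k and validating the clustering in high dimensions, are exactly the parts this toy glosses over.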

Now that we're two-thirds of the way through the book, we come to the elephant in the machine learning room, the multilayer artificial neural network. These networks are built up from perceptrons with various activation functions:
However, logistic activation functions can be problematic if we have highly negative input since the output of the sigmoid function would be close to zero in this case. If the sigmoid function returns output that are close to zero, the neural network would learn very slowly and it becomes more likely that it gets trapped in the local minima during training. This is why people often prefer a hyperbolic tangent as an activation function in hidden layers.
And they're trained with various types of back-propagation. Chapter 12 shows how to implement neural networks from scratch, and chapter 13 shows how to do it with TensorFlow, where the network can end up running on the graphics card supercomputer inside your PC. Since TensorFlow is a complex beast, chapter 14 gets into the nitty-gritty details of what all the pieces of code do in the implementation of the handwritten digit identifier we saw in the last chapter. This is all very cool stuff, and after learning a bit about the CUDA programming that's behind this library with CUDA by Example, I have a decent appreciation for what Google has done in making it as flexible, performant, and user-friendly as they can. It's not simple by any means, but it's as complex as it needs to be. Probably.
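Back on activation functions for a moment: the slow-learning problem the earlier quote describes is easy to see numerically, since back-propagation multiplies the activations' derivatives together. A quick sketch of my own:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    # Derivative of the logistic sigmoid: s * (1 - s),
    # which is at most 0.25 and vanishes as the output saturates
    s = sigmoid(z)
    return s * (1.0 - s)

def tanh_grad(z):
    # Derivative of tanh: 1 - tanh(z)^2, which peaks at 1.0
    return 1.0 - math.tanh(z) ** 2

# For a strongly negative input, the sigmoid's output and gradient are
# both nearly zero, so the back-propagated error signal dies out; tanh
# saturates too, but its larger gradient and zero-centered output help.
```

That vanishing gradient for saturated sigmoid units is why hidden layers often get a hyperbolic tangent (or these days ReLU) instead.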

The last two chapters look at two more types of neural networks: the deep convolutional neural network (CNN) and the recurrent neural network (RNN). The CNN does the same handwritten digit classification as before, but of course does it better. The RNN is a network that's used for sequential and time-series data, and in this case, it was used in two examples. The first example was another implementation of the sentiment analyzer for IMDb movie reviews, and it ended up performing similarly to the regression classifier that we used back in chapter 8. The second example was for how to train an RNN with Shakespeare's Hamlet to generate similar text. It sounds cool, but frankly, it was pretty disappointing for the last example of the most complicated network in a machine learning book. It generated mostly garbage and was just a let-down at the end of the book.

Even though this book had a few issues, like tedious code duplication and explanations in places, the annoying cross-referencing, and the out-of-place chapter 9, it was a solid book on machine learning. I got a ton out of going through the implementations of each of the machine learning algorithms, and wherever the topics started to stray into more in-depth material, the authors provided references to the papers and textbooks that contained the necessary details. Python Machine Learning is a solid introductory text on the fundamental machine learning algorithms, covering how they work mathematically, how they're implemented in Python, and how to use them with scikit-learn and TensorFlow.


Of these two books, Data Smart is a definite read if you're at all interested in data science. It does a great job of showing how the basic data analysis algorithms work using the surprisingly effective method of laying out all of the calculations in spreadsheets, and doing it with good humor. Python Machine Learning is also worth a look if you want to delve into machine learning models, see how they would be implemented in Python, and learn how to use those same models effectively with scikit-learn and TensorFlow. It may not be the best book on the topic, but it's a solid entry and covers quite a lot of material thoroughly. I was happy with how it rounded out my knowledge of machine learning.