I assembled the Mycroft II dev kit I received from NeonAI. I don’t presently have access to all of my tools, and it came with some additional parts for a 3D-printed enclosure. I resorted to duct tape in places to get things stable. It’ll have to do until I can get the files and print the remaining parts.
me: “play radio indie”
mycroft: “cannot find radio indeed”
me: “play radio indie”
mycroft: plays radio Hindi
The folks at Mycroft received a lot of flak during the holidays: many of the staff were out while several people received their Mark IIs and discovered they were running mycroft-dinkum. Not receiving what you expect is a bit disappointing, but this announcement yesterday gives me some hope for privacy-focused voice assistants: speech-to-text at the edge.
Voice assistants, even the open source ones, don’t support WPA3. community.mycroft.ai/t/this-is…
I’ve done my #mycroft development for the week. Didn’t get as far as I would have liked but hopefully I’ll slowly be able to get some people interested in porting skills over to the mycroft mark II with me. It’s a cool piece of hardware and it’s a shame that a lot of the functionality had to be stripped out to make something stable for consumers.
Rather than use the radio skill, which has quite a bit of GUI to it (station, play, stop, next, etc.), I’ve looked to the IP address skill for inspiration. I’ll need to define a spot for the help commands and figure out how to format them for this screen.
Looking in the skills folder of the mycroft-dinkum repository proved to be a good bet. /skills/play-radio.mark2/ui contains a few .qml files and images, and the skill registers its GUI pages in its __init__.py file.
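To make sure I understood the pattern before touching the real code, here’s a minimal stand-in sketch that runs without a Mycroft install. The real skill uses the self.gui object provided by the skill base class; everything here (FakeGui, the “station” key, Station.qml) is illustrative, not the actual play-radio.mark2 code.

```python
# Stand-in sketch of the GUI pattern seen in skills/play-radio.mark2:
# set some session data, then name a .qml page under ui/ to display.
# FakeGui only mimics the call shape of the skill's self.gui object.

class FakeGui:
    """Stub: a dict of session data plus a show_page() call."""

    def __init__(self):
        self.session = {}
        self.shown = []

    def __setitem__(self, key, value):
        # In a real skill, this data becomes available to the QML page.
        self.session[key] = value

    def show_page(self, page):
        # In a real skill, this loads ui/<page> on the Mark II screen.
        self.shown.append(page)


gui = FakeGui()
gui["station"] = "Radio Indie"  # hypothetical variable read by Station.qml
gui.show_page("Station.qml")    # hypothetical page name
print(gui.shown)
```

The useful takeaway for my help-screen idea: the skill side is just data plus a page name, and the layout work all lives in the QML file.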
Found an interesting troubleshooting tip that I hadn’t seen elsewhere:
“Hey Mycroft, show speech to text”. This will show you how Mycroft interpreted your last few utterances. Certain genre names are difficult to get right.
The mycroft-dinkum GUI differs from the previous iteration’s. While the previous iteration has a robust set of documentation, the documentation for dinkum’s GUI is sparse. It looks like I’ll be reading through this and seeing if I can find a way to call or modify the GUI from within a skill. Thinking on it, the Mark II radio skill might offer some insight, because it has a “searching for music” screen that pops up.
Found this bit of code with an interesting comment while reading through the skills. It looks like there’s leveled logging built into the skill classes. I’m new to Python, so I wasn’t sure at first whether this is part of the standard library; it appears the skill base class exposes a standard-library logging.Logger as self.log.
"""Skills can log useful information. These will appear in the CLI and
the skills.log file."""
self.log.info(
    "There are five types of log messages: "
    "info, debug, warning, error, and exception."
)
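For reference, the same five levels exist in Python’s standard logging module, which self.log appears to wrap; a quick sketch (the “my-skill” logger name is made up):

```python
# Plain stdlib version of the leveled logging shown above.
import logging

log = logging.getLogger("my-skill")
log.setLevel(logging.DEBUG)  # emit everything from debug up

log.debug("fine-grained detail for debugging")
log.info("normal progress messages")
log.warning("something unexpected but recoverable")
log.error("something failed")
try:
    1 / 0
except ZeroDivisionError:
    # .exception() logs at ERROR level and appends the current traceback
    log.exception("error plus traceback")
```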
Doing some work on the Mycroft II tonight. I’m trying to see if I can get the beginnings of a standard for adding a GUI display for the help command on the voice assistant. The goal is for it to function somewhat like a CLI command, with the voice command being {wakeword} {skill} {help}.
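As a first sketch of what the help output could look like, here’s a plain-Python formatter for the small Mark II screen. Every name here, and the 40-character width, is an assumption of mine, not anything from dinkum:

```python
# Sketch of the {wakeword} {skill} {help} idea: given a skill's
# commands, produce a short help string sized for a small screen.

def format_help(skill: str, commands: dict[str, str], width: int = 40) -> str:
    """Render one line per command, truncating lines wider than the screen."""
    lines = [f"{skill} help"]
    for phrase, desc in commands.items():
        line = f'"{phrase}" - {desc}'
        if len(line) > width:
            line = line[: width - 3] + "..."
        lines.append(line)
    return "\n".join(lines)


print(format_help("radio", {
    "play radio <genre>": "stream a station of that genre",
    "stop": "stop playback",
}))
```

If the stand-in holds up, the real version would hand this string (or a list of lines) to a QML page rather than print it.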
Discussion of the various open source voice assistants associated with the mycroft project. community.mycroft.ai/t/mycroft…
Received a #mycroft II boot drive from NeonAI for the Neon AI OS, and the packaging came with a quick start guide and a two-page sheet of commands to get you started. The Mycroft came with a more designed, but much briefer, setup guide. It makes sense to make the unboxing feel like a moment given how long people have waited for these things, but it’s clearly an early product and I think more documentation in hand should have been the focus.
Advice on training a #mycroft wakeword github.com/sparky-vi…
I haven’t felt the need nor had the time to do this yet. I’m sort of waiting to see where things shake out and how the other compatible software plays out.
Bookmarking for when I later go to install neon on the mycroft community.mycroft.ai/t/neon-ai…
