Here's a new page containing some frequently-asked questions not related to LinnStrument.
June 30, 2024
I've been enjoying the recent developments in generative AI, using text and sometimes media to generate images, videos or audio recordings. But these are all final rendered results. Instead, I'd like to see a generative AI that uses text or audio input to create an interactive engine, for example a real time interactive image or video synthesizer, an interactive app or game, or more importantly to me, an interactive MIDI Synth.
Imagine this:
Roger: "Create a browser-resident MIDI synthesizer, one that responds naturally to live expressive MPE data from my LinnStrument connected to my computer, and has screen controls for 1) morphing between the generative parameters--not just the volume mix--of a viola, a Les Paul through a Marshall, and Jan Hammer's solo synth sound, 2) consonance/dissonance, 3) thinness/thickness, 4) brightness, and 5) harmonic complexity."
(The AI generates the synth in a browser, with the requested screen controls. I play it from my LinnStrument.)
AI: "How's this?"
Roger: "Not bad, but it needs a few tweaks. First, the Brightness control sounds too much like a lowpass filter. Make it respond more like the natural brightness of an acoustic instrument that is played louder. And replace the Jan Hammer sound with Keith Emerson's solo synth sound from 'Lucky Man'."
(The AI makes the requested tweaks to the generated synth.)
AI: "Is this closer to what you are imagining?"
Roger: "Yep. You nailed it. Thanks."
Such an AI could be trained on audio recordings and text discussions from synthesis forums, YouTube synthesis channels, synthesis posts on social media, etc.
I like this idea because it would realize the promise of AI--to change from humans adapting to technology to technology adapting to humans. I also like it because so much of modern synthesis requires learning engineering principles and terms like "lowpass filter" or "resonance", instead of using musical terms.
If anyone is aware of the development of such an idea, please let me know.
Update July 2, 2024: The day I posted this, my friend Tim Thompson let me know about websim.ai, which takes a big step in the direction of what I described above. It uses text prompts to create a web site including text, sounds, generated images, malleable 3D objects and more. Here's a video summarizing its current capabilities.
-------- Updated February 4, 2024 post. See below for original August 2023 post ---------
I bought and received my Apple Vision Pro two days ago and have been enjoying learning it.
The takeaway is that it’s a very well-designed AR/VR headset. It’s also a very difficult product to design, full of design challenges and requiring hard compromises. All things considered, I think Apple largely made the right choices for the product’s goals.
Among the technical innovations are:
1) High enough resolution to display fairly small text clearly. That’s big.
2) Accurate eye and hand tracking. Simply look at a screen button and touch your fingers together to select it.
3) Your space is quickly and automatically scanned by LIDAR. No need to manually define your space. And the scan is accurate— AR objects are firmly locked to their position in your space.
4) Fits nicely into the Apple ecosystem, running nearly all iOS/iPadOS apps, with lots of optimized apps coming.
5) Considering all the high technology they’ve packed in, and that the battery is external and fits in your pocket, I find it very light and comfortable.
High Resolution Displays
Of these, #1 is a big one. Most VR headsets have a resolution of around 20 pixels per degree (PPD). Experts say that human retina resolution is around 60 to 70 PPD. By my measurements, the Vision Pro achieves a resolution of 58.6 PPD!
The bad news is that in order to squeeze so many pixels that close together, Apple chose to reduce the total field of view (FOV), which by my measurement is a mere 77 degrees. Of that 77 degrees, each eye sees only 65.5 degrees, resulting in a binocular overlap (the portion of each eye screen that is seen by the other eye) of 82.5%. I estimate that each of the eye screens has 3840 pixels horizontal, resulting in my estimated 58.6 PPD. However, this small FOV makes it seem a little like you’re looking through a pair of binoculars. As a basis of comparison, the Meta Quest 3 has a FOV of 110 degrees, a full 42% wider.
But the Quest 3’s resolution isn’t good enough to clearly read text except at large font sizes. Apple needed to make a convergence device that used as much as possible of the Apple ecosystem of apps, enabling their goal of “spatial computing”. That means displaying small text clearly. So their decision was to use extremely dense and expensive per-eye displays (I estimate 3840h by 3072v pixels) and to prioritize high resolution over field of view. Given that this decision makes the device usable for normal high-detail computer work, I think it was the right choice.
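The arithmetic behind these measurements is simple enough to check in a few lines. Here is a minimal sketch; the pixel count and FOV figures are my estimates from the text above, not official Apple specifications:

```python
# Estimated Vision Pro display geometry (author's measurements/estimates)
per_eye_h_pixels = 3840   # estimated horizontal pixels per eye
per_eye_fov = 65.5        # horizontal degrees seen by each eye
total_fov = 77.0          # combined horizontal field of view, degrees

# Pixels per degree: horizontal pixels divided by horizontal degrees
ppd = per_eye_h_pixels / per_eye_fov

# Binocular overlap: the portion of each eye's view also seen by the other eye
overlap_deg = 2 * per_eye_fov - total_fov
overlap_fraction = overlap_deg / per_eye_fov

print(f"{ppd:.1f} PPD, {overlap_fraction:.1%} overlap")  # 58.6 PPD, 82.4% overlap
```

Change the inputs to the Quest 3's figures (roughly 110 degrees total) and the PPD drops accordingly, which is the trade-off discussed above.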
Not powerful enough for high-quality VR worlds
The one Apple compromise that I personally regret is that the device’s M2 CPU isn’t powerful enough to run the type of complex and detailed virtual open worlds that are typically created in game engines like Unreal or Unity. (I wrote about this in my 2023 post below.) So there probably won’t be any apps of this type, or if there are, they’ll be simplified and more cartoon-like.
I understand why. For a standalone device, the CPU needs to live next to your forehead and not burn a hole in it.
Think about how much processing this thing is doing. The Vision Pro’s two eye screens have a combined 23 million pixels, which must be refreshed 90 times per second. To render that many pixels at that high frame rate requires an enormous amount of compute power. By comparison, the Valve Index, a popular PC VR headset, has a per-eye resolution of 1440 by 1600 pixels, which is 4.6 million pixels total for both eyes, a mere 20% of the Vision Pro’s resolution.
Given Apple’s focus on a standalone wearable convergence device, I agree with their design compromise. However, I wish they could have added one or both of two hardware features:
1) Provide a lower-resolution video mode that treats each 2x2 pixel group as a single lower-resolution pixel. That would drop the total stereo pixels per frame by 75%, to 5.75 million, lowering the processing requirement proportionally.
2) Provide the option for the Vision Pro to act as a remote headset tethered to a high-powered Mac like a Mac Studio, providing far more CPU/GPU power than the built-in M2 and R1 chips.
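To put rough numbers on the rendering load discussed above, here is a quick calculation using Apple's claimed 23 million combined pixels, the 90 Hz refresh rate, and the Valve Index comparison; these are claimed or estimated figures, not benchmarks:

```python
# Rough pixel-throughput comparison (claimed/estimated figures)
vision_pro_pixels = 23_000_000                       # Apple's claimed total, both eyes
refresh_hz = 90
pixels_per_second = vision_pro_pixels * refresh_hz   # ~2.07 billion pixels/s

valve_index_pixels = 1440 * 1600 * 2                 # per-eye 1440x1600, both eyes
ratio = valve_index_pixels / vision_pro_pixels       # ~0.20, i.e. ~20%

# Hypothetical 2x2-binned mode: four physical pixels per rendered pixel
binned_pixels = vision_pro_pixels // 4               # 5.75 million per stereo frame
```

The binned mode would bring the per-frame workload down to roughly Valve Index territory, which is why it could make complex open worlds feasible on the built-in chip.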
Summary
I agree with Apple’s very difficult design choices. Overall, the Vision Pro is an amazing piece of technical innovation. And as it gets smaller, cheaper and more powerful with each new model, we gradually move toward a future where a light pair of glasses does what Vision Pro does and far more.
---------- Original August 2023 post ----------
I’ve been using VR (virtual reality) since 2019 and enjoy it very much. Though I’ve tried PC-based VR headsets, my main experience is with the Meta Quest 2 because I’m a mac user and don’t want to buy a PC or deal with the PC-VR setup issues.
The Quest 2 isn’t great quality, with limited resolution, poor dynamic range and poor optics, and it’s somewhat uncomfortable to wear. It’s a little like an Android phone strapped to your face. But it’s the only choice for an all-in-one headset, and at only $299 (US), it’s an easy way to experience and learn about VR now.
How do I use it? I don’t use it for any work activities because the display is too fuzzy for text and it doesn’t work well with a computer keyboard or trackpad, plus I don’t trust Meta to have access to any of my important documents.
I don’t use it for games that involve shooting, fighting, action, or zombies because I have no interest in those things.
I don’t use it for music apps because VR lacks the precision needed to play notes or chords with musical expression, though it has merit for conducting/arranging musical objects like clips, samples, beats, etc.
I don’t use Meta’s “Horizon” metaverse because 1) it isn’t very beautiful, consisting of various combinations of simple geometric objects, 2) there are too many unruly, screaming children, and 3) I’m not interested in entering a public space to make friends with people who are all concealing their identity. AltSpaceVR was a better alternative with more beauty and more interesting/creative people, but it no longer exists because Microsoft shut it down.
My primary interest is in games or apps that take me to beautiful imagined worlds, either for an activity or to meet with friends that I already know. I use it for the same reason that I take vacations— to discover/experience beautiful places and to enjoy the feelings, excitement and engaging conversations with known friends, experiences that tend to arise in such beautiful environments.
Interestingly, such beautiful imagined worlds can also exist on a computer or iPad or phone. But the difference is the level of immersion. To me, it feels much more real when the imagined world is all around you in 3D binocular vision, and completely replaces your surroundings. It gives me a similar feeling to being on vacation in a beautiful place.
Unfortunately, the Quest’s limited rendering power and visual quality limit the level of beauty, so even beautiful imagined worlds tend to be somewhat cartoony or fuzzy. But here are three examples of Quest apps I’ve enjoyed:
1) Walkabout Mini Golf: A collection of beautiful imagined worlds— a tropical island, a Japanese garden, a bucolic valley of windmills and gardens, the original Myst island, an undersea imagined city of Atlantis and more, each containing mini golf courses to be played with up to 7 other networked players. The Quest’s limited power constrains each world to only a very stingy 600,000 triangles, but where the game shines is in the creativity and skill of the artists who created maximum beauty within that limitation. Also, you can fly around any of these worlds, a fun experience.
2) Puzzling Places: This is a 3D jigsaw puzzle. The company uses photogrammetry to create 3D scans of beautiful places— rooms in European palaces, sculptures, towns and cities, landscapes, works of architecture, historical buildings, ancient monasteries, and more. You select one, chop it up into between 25 and 400 pieces, then assemble the pieces, hearing ambient sounds of the place as you do. Once finished, you can move your head in and around the assembled structures, like a god floating down the streets of a miniature Mont Saint Michel.
3) The Room VR: A Dark Matter: This is a puzzle game with beautiful and realistic spaces— a museum, an archeological dig, a chapel, and more. The spaces are entirely imagined and fairly detailed, and impatient people like me appreciate the hints that pop up when it’s taking me too long to figure out a puzzle. Of course the problem with a story game is that once you’ve played it a couple of times and know the solutions to the puzzles, there is less interest in playing it again.
The Quest is a good first step in the evolution of VR, but I look forward to the day when a VR headset has the resolution and processing power to run the sort of large, complex, highly detailed, realistic open worlds that we now see running under Unreal or Unity Engines on high-powered PCs with high-powered VR graphics cards.
In other words, my goal for a VR headset is to experience imagined worlds that are close to the visual quality, detail and movement of reality. I am reminded of a conversation with my friend the late Dartmouth music professor Jon Appleton. After explaining VR to him, he asked “Can we both put on VR headsets and meet in a realistic cafe in Paris?”
That day may be getting closer. In June 2023, Apple pre-announced their Vision Pro headset, focusing on AR (augmented reality) by including cameras that pass the view of what’s around you through to the eye screens, mixed with generated visuals. They call this “spatial computing”. I want one.
Its resolution is impressive. Apple claims a total of 23 million pixels for both eye screens combined. A picture on the Apple site shows internal screen openings with an aspect ratio of around 5:4, so I’m guessing the per-eye resolution to be around 3840 pixels horizontal by 3072 vertical, which totals 11.8 million pixels, around half of 23 million. A reporter who had a demo estimated the field of view to be 110 degrees, so assuming a binocular overlap of 85%, that works out to about 40 pixels per horizontal degree. While that’s lower than the generally-accepted retina resolution of around 60 PPD, it’s far superior to Quest or any other VR headset other than a small number of expensive industrial models.
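The estimate above can be reproduced step by step. In the sketch below, the 5:4 aspect ratio, 110-degree FOV, and 85% binocular overlap are the guesses stated in the text, not Apple specifications:

```python
import math

# Work backward from Apple's claimed 23 million total pixels
claimed_total_pixels = 23_000_000
per_eye_pixels = claimed_total_pixels / 2      # 11.5 million per eye

# With a guessed 5:4 aspect ratio: h * v = per_eye_pixels and h = 1.25 * v
v = math.sqrt(per_eye_pixels / 1.25)           # ~3033, close to 3072
h = 1.25 * v                                   # ~3791, close to 3840

# Per-eye FOV from total FOV and overlap: total = eye_fov * (2 - overlap)
total_fov = 110.0                              # reporter's estimate, degrees
overlap = 0.85                                 # assumed binocular overlap
eye_fov = total_fov / (2 - overlap)            # ~95.7 degrees per eye

ppd = 3840 / eye_fov                           # ~40 pixels per degree
```

The result, about 40 PPD, is the figure quoted above: below retina resolution but well above the Quest's roughly 20 PPD.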
So Vision Pro has high resolution and does AR very well, but does it have the CPU and GPU power to achieve my above-stated goal, especially while painting 23 million pixels per frame? Doubtful. The Vision Pro has a base M2 CPU, along with a secondary R1 chip “to process input from the cameras, sensors, and microphones”. This suggests that the M2—the same chip used in the lower-end MacBook Air and not the more powerful M2 Pro, Max or Ultra chips— is doing all of the application processing and rendering. Unfortunately, a base M2 can’t do what a high-end gaming PC and a high-end VR graphics card can.
Perhaps Apple has designed in the ability to connect a Vision Pro to a high-powered Mac for the more demanding VR graphics applications. Or given Apple’s focus on AR, maybe they don’t care about more demanding graphics applications. They’re not saying yet.
There’s an interesting debate about the merits of AR vs. VR. The AR advocates see the tech eventually shrinking down to a lightweight pair of glasses that augments your reality wherever you go with all kinds of useful information related to what you’re seeing. When that happens, it’s a viable replacement for the smartphone.
But sometimes I don’t want to augment my reality. I want to replace it with an alternate, beautiful, imagined reality. Sometimes I want to take a vacation to an alternate beautiful place for a half hour. Or visit a movie theater where the large screen in the dark setting increases my immersion in the story. These experiences require VR or AR goggles that can block out what you’re seeing around you.
VR or AR, I find this new chapter of technology exciting, and I look forward to the next five or ten years of innovation as each new annual model gets better and closer to my goal, and makes me want to discard last year's model. :)
A: (2023) I had been working on a new drum machine with a working title of "LinnDrum II". I'm sorry to say that I've stopped working on it. In future I may partner with a larger company to resume development, but I have no current plans to do so. Here's why:
A: (2023) I appreciate the value that some people place on these old products and the famous songs that they were used to create. I do too, and I was very lucky that they were so successful. I'll always cherish those products and the music that creative artists used them to make.
First, I couldn't reissue the MPC60 or MPC3000 because though I designed them, they were Akai products. And regarding reissuing the LM-1 or LinnDrum, as I wrote in the previous FAQ, I'm less interested in drum machines than I was in the 1980s, I've always been more interested in the future and new ideas than in the past and nostalgia, and the current popular focus on reissuing old products sometimes feels to me a little like "Make Drum Machines Great Again" :)
From an engineering perspective, it's very tedious and boring to recreate a product for which the parts no longer exist, and not particularly fun for me.
The other thing is that while these old machines were very advanced for their day, by today's standards they are hopelessly outdated. Do people really want to return to the non-dynamic drum buttons, poor timing resolution, 8-bit samples, crude numeric display, or cassette data storage of the LM-1? Maybe it's better to use modern products while holding those old antiques in our memory as a fond recollection of an exciting earlier time.
Having written the above, you might find this page interesting.
A: Yes, please visit the Press page, where you can find lots of old interviews with me, talking about such things.
A: (updated 2023) Of all the ways to lose huge amounts of money, making a prototype of your idea is one of the most effective. First, there's a very good chance that others (and possibly many others) have thought of your product idea before, and the reason it isn't already on the market is either 1) others don't find it as valuable as you do, or 2) the necessary engineering or material costs would make it sufficiently expensive that few would buy it.
The first thing to do is to learn the true value of your product idea in the marketplace. One of the biggest mistakes people make is to think that everyone will value their idea as much as they do. First document your product idea, including a clear text description, drawings or 3D renderings, and a realistic customer price. To arrive at the realistic customer price, don't use a price you'd like it to sell for, but rather what it must sell for considering the total parts cost, development cost, manufacturer profit and retailer profit. Then take an objective survey of people you know and don't know, asking them not whether they like it but whether they would definitely buy it at the realistic price you've given. To ensure they aren't just telling you what you want to hear, tell them it's someone else's idea, not yours, and don't appear to like or dislike it.
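As one way to sanity-check a "realistic customer price", here is a simple markup-chain sketch. The specific numbers (parts cost, expected units, manufacturer and retailer multipliers) are illustrative assumptions, not figures from this page; substitute your own:

```python
# Hypothetical markup-chain estimate for a music hardware product.
# All figures below are illustrative assumptions.
parts_cost = 120.00          # total parts cost per unit
dev_cost = 50_000.00         # one-time development cost
expected_units = 1_000       # units you realistically expect to sell

# Amortize development cost across expected sales
cost_per_unit = parts_cost + dev_cost / expected_units   # 170.00

# Manufacturer sells to the retailer at a margin; retailer marks up to the customer
manufacturer_price = cost_per_unit * 2.0   # assumed 2x for overhead and profit
retail_price = manufacturer_price * 1.5    # assumed 50% retailer markup

print(f"Realistic customer price: ${retail_price:.2f}")  # Realistic customer price: $510.00
```

Notice how a $120 parts cost can honestly become a $500+ retail price; that multiplied-up number, not the parts cost, is the price to use in your survey.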
If you still want to make a prototype, try to find a way to make it for no more than $1000 and ideally for free. If you're not technical and you have some friends who are, get them excited about it and ask for free help in exchange for future payment if you make any money later. Important: try to avoid designing new circuit boards, embedded software (software that runs on the small computers inside self-contained products) and metal/plastic mechanical housings. Very commonly, people start doing this thinking they'll spend only a few thousand dollars then later find they've drained their relatives' savings only to teach themselves how difficult it is.
For many music product ideas, it's possible to--by yourself--create a functional prototype by connecting and reconfiguring a variety of existing low-cost hardware and software music/audio products. It won't be pretty but it will be functional, allowing you to prove your concept at low cost and give a better demonstration of its usefulness. For hardware and human interface (buttons, knobs, sliders, drum pads, etc.), use existing low-cost MIDI control surfaces. To prototype the software function, it's often easier to use graphic programming environments like Max/MSP, Max for Live, or the free Pure Data (Pd). For prototyping hardware and software, the low-cost Teensy and Audio Adaptor boards from www.pjrc.com are excellent and include the Audio Library, a graphical tool for programming audio processing and synthesis. The Daisy Seed board from www.electro-smith.com is also excellent, has 24-bit audio in/out on the board, and similarly includes a library of audio processing and synthesis functions.
You might also consider taking the annual one-week workshop that I co-teach at Stanford University's CCRMA computer music center, in which students create their music project ideas based on the Teensy platform.
Regarding presenting your idea to a music products company so they will pay you a royalty and design/manufacture it for you, this is a highly unlikely scenario. While companies are always interested in their customers' free suggestions, it's very unlikely that they will pay anybody for anything unless they absolutely have no choice. Often they will politely decline to hear your idea because 1) customers' product ideas are rarely unique, and 2) if they were already planning the same idea, they don't want you to later accuse them of stealing it. However, if they truly feel it's worth spending their money to make your idea into a product and they feel you have the necessary skills to help them, the best scenario is probably that they'll offer you a job.
Regarding how to patent your idea, getting a patent is another great way to lose lots of money. You can't actually patent an idea but rather only the implementation of an idea. And it's unlikely that your idea is sufficiently unique--and not obvious to someone skilled in the field--that a patent would do you any good. Also, having a patent doesn't prevent anyone from stealing your idea; it simply gives you a better case for infringement if and when you must hire an expensive lawyer to sue them.
My best recommendation is to use some of the aforementioned tools like Teensy to make a functioning prototype of your idea, then put it in a stock metal or plastic box, drill holes for knobs or other controls, screen-print the front panel, figure out any other details, then build 10 or 25 of them and sell them yourself on your web site, all in evenings and weekends without quitting your day job. Don't worry about business licenses or regulations because no one will care until you're making significant money. In this initial phase, you'll learn how much others value your idea, and whether your design needs to be changed. Then, only once you're confident that people value your design, you can look into manufacturing and expensive up-front costs like custom parts.
Having written the above, it is also true that there are few things more personally gratifying than the exhilaration of creating and using a product that came from your own idea. The good news is that, armed with a willingness to learn some of the inexpensive tools I've described above as well as a little self-honesty, you stand a better chance than ever before of turning your idea into a functioning prototype. If people like it, maybe make a few more, place an ad and sell them yourself while you figure out how to make it cheaper and prettier. Regardless of whether it makes you money or not, you will have taken a fascinating journey, learned valuable new skills, influenced the art of music-making and made a personal contribution to the world of ideas.