
Algorithms, Synths, and the Modeling Agency of the Future

Summary

Imagine a world where a creative director walks over to a workstation and says, "We need a family, two parents and two kids, sitting down playing a game together." The person at the workstation punches a few keys, adjusts a few things, and boom, out comes a unique image that is exactly what the creative director asked for. Not a mock-up, not a set of casting photos. The complete image, ready for use, because you already own the license. You paid to license the technology: an algorithm that uses an ever-growing library of hundreds of thousands of images (all of them properly licensed) to generate this unique imagery. Our guest for this episode, Mark Milstein, is Chief Operating Officer of a company called vAIsual, and with the tech they've developed, you may not have to imagine this world much longer.

Key Takeaways

  • vAIsual is a company that has built an algorithm and interface that generate entirely synthetic "models." These people do not exist in the real world and are available for licensing without the legal considerations that come with hiring a real-world model.
  • This technology can and will manifest in a lot of ways — SaaS, PaaS, and white label services are all in the future of this technology.
  • vAIsual shoots hundreds of thousands of images of real people every year. These individuals sign model releases granting the company rights to their biometrics, so the images can be used as reference data for the generation of synths.
  • The e-commerce photographer of the future may become something more of a synthographer, able to leverage their skills to generate content exactly in line with the creative brief.
  • This technology could allow for the extreme localization of content, allowing global brands to easily represent the various demographics they serve.

Full episode transcript

Daniel Jester:
From Creative Force, I'm Daniel Jester, and this is the E-Commerce Content Creation Podcast.
Daniel Jester:
Imagine a world where a creative director walks over to a workstation, and says, "We need a family, two parents, two kids, sitting down and playing a game together." The person at the workstation punches a few keys, adjusts a few things, and boom. Out comes a unique image that is exactly what the creative director asked for. Not a mock-up. Not a set of casting photos. The complete image, ready for use, because you already own the license. You paid to license the technology, the algorithm, that uses an ever-growing library of hundreds of thousands of images, which have also been properly licensed, to generate this unique imagery.
Daniel Jester:
My guest for this episode, Mark Milstein, is Chief Operating Officer of a company called vAIsual. And with the tech they've developed, you may not have to imagine this world much longer.
Mark Milstein:
We will also make this available via an API, which will allow the white-labeling of the technology. So if you are, let's say, I don't know, you're Bank of America or Citibank, or, name your favorite savings and loan, and you have a marketing department, and you want to generate faces for local ads, rather than licensing that content through traditional stock platforms, you can then generate your own unique images and faces, based upon wherever your office might be: if it's in Singapore, or if it's in Canada.
Daniel Jester:
We invited Mark to discuss this with us after seeing an article on LinkedIn about AI-generated models who were already available to license on various stock platforms. Mark takes us through the tech, the impact to the industry, and the possibilities that this tech can unlock. Let's go.
Daniel Jester:
This is the E-Commerce Content Creation Podcast. Joining me for this episode is Mark Milstein, COO of the company vAIsual, which you may have seen an article about going around LinkedIn about a week ago, as of the time we're recording this (so by the time you hear this, maybe about two weeks ago), on algorithmically-generated models available for stock photography. Mark, welcome to the show. I'm very excited to talk to you about your algorithm models.
Mark Milstein:
Thanks a lot. Thanks a lot for having me. I'm looking forward to telling everybody, from the top of every mountain that I can climb up to, what we're doing.
Daniel Jester:
So, Mark, you and I met last week, and we kind of chatted a little bit about this. Give our listeners a run-down about vAIsual and what you guys do, and a little bit about this technology that you've developed.
Mark Milstein:
Right. So, basically, vAIsual has created what we like to describe as the world's first commercially-available algorithmic camera. It's basically a piece of technology that combines legally clean, biometrically-released, real-life datasets of real people, which act as the film, so to speak, with an amazing AI-powered technology, an algorithm that we've pioneered, to create 100% synthetic photography. Photography generated from a command line, from a search box. I can type into a search box, "Happy woman standing on the side of a road, looking at a sunset," and it will generate that photo. That full complexity of photography is not available at this very moment, but that is the general aim of the technology.
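To make the prompt-to-photo idea concrete, here's a minimal sketch of what a request to such a generation service might look like. The endpoint, payload fields, and client code are assumptions for illustration; vAIsual's actual interface is not described in this episode.

```python
import json
from urllib import request

# Hypothetical endpoint; the real vAIsual API is not published in this episode.
API_URL = "https://api.example.com/v1/generate"

def generate_photo(prompt: str, width: int = 1024, height: int = 1024) -> bytes:
    """Send a text prompt to a hypothetical synthetic-photo service
    and return the generated image bytes."""
    payload = json.dumps({
        "prompt": prompt,           # e.g. the query Mark types into the search box
        "width": width,
        "height": height,
        "license": "royalty-free",  # the key point: the output is pre-licensed
    }).encode("utf-8")
    req = request.Request(API_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    img = generate_photo("Happy woman standing on the side of a road, looking at a sunset")
    with open("synthetic_photo.png", "wb") as f:
        f.write(img)
```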
Daniel Jester:
So I guess, thinking about it... I may have been simplifying it a little bit when I say "the idea of the modeling agency," because it's not just the individual in the photo that is algorithmically generated and available, as you mentioned, for licensable stock photography use. You're not worried about getting a model release, you're not entering into negotiations; however you set up this business model, you're paying a fee and you get to use that image. But it's also the image itself. You mentioned it being sort of the algorithmic camera, the entire photo, in some ways, being generated. Can you speak a little bit to the nuance that I'm describing there?
Mark Milstein:
Sure.
Daniel Jester:
There's a difference between just creating the model, and also just wholesale creating this environment that you're talking about.
Mark Milstein:
Right. So, this is a multi-stage adventure. At this very moment in time, we're able to generate real humans, on the fly, by typing a simple query into a command line, a search box: "happy man with beard, smiling." "Unhappy man with beard, angry." "Young boy with red shirt and red hair in a neutral expression." However, very shortly, we'll be combining those humans with any variety of backgrounds, whether that be interiors or exteriors, landscapes, all kinds of environments. That will create synthetic photography (because our people don't exist) that rivals anything that comes out of today's DSLR cameras, with equal complexity.
Mark Milstein:
So, in other words, rather than having a photographer and a stylist and production assistants and models come together to create photography, you will only need to call on somebody whom we like to call a "synthographer": someone who has the skills of a photographer and a graphic designer, and all of the aforementioned other jobs, and is able to type a creative query into the search box to generate a photograph that previously took many, many people and many days in a studio to create.
Daniel Jester:
I definitely want to come back to "synthographer" and this idea, but the next question I wanted to ask you was about these models and these people. In particular, this article from PetaPixel was about something like 100 stock photo models that could be licensed; on the day that article went live, these were available to use. And this is not exactly the same thing as rendering a character for a video game, so to speak. Your algorithm pulls from actual datasets that were created with actual people who exist in the world-
Mark Milstein:
Right.
Daniel Jester:
Who have signed over this information, which your algorithm then pulls from to create entirely new... individuals, so to speak. Is that true? Do I have that right?
Mark Milstein:
Correct. So, right now, we have a studio, and we're shooting six days per week, and we have been for the past year, and have captured approximately 300,000 images, taken from well over 1,000 individual models. Those images are then fed into our algorithm, which generates hundreds of new, never-existed-before synthetic humans from that raw data. And those are static images; they're not dynamic. But the technology can conjure them up in that dynamic fashion: as I described to you earlier, we're able to type a request for a certain type of person into a command line or search field.
Mark Milstein:
But, yes, all synthetic photography comes from a real, live person, at a certain point. But what comes out of the algorithm doesn't resemble, in any way, shape or form, that original person. There is no ability to even link one to the other. All of our models have signed biometric releases. All of our images are GDPR-compliant. GDPR is the ultra-stringent European Union personal rights law that prohibits companies from exploiting the biometrics of any human being, and/or using them for data mining, without their permission. And so we have acquired those rights from these models, which then allows us to make legally clean results.
Daniel Jester:
That's a great segue into the next question I wanted to ask you, which is: the audience of our podcast is largely photo studio professionals, probably mid-to-senior level managers more so than photographers or stylists. What impact do you see this technology having on an industry like e-commerce, where there's a constant need for models, for talent, for all of that sort of thing? And I'm really interested to hear from you on the legal ramifications of this. As you mentioned, the GDPR, those guidelines around personal... You know, your individuality belongs to you. And what that means for licensing these individuals, or these... How do you refer to them, Mark? "Individuals"? Or each of these... creations that your algorithm has created. Maybe I'm personifying them a little bit too much. But licensing these algorithmically-generated models, what impact do you see that having on an industry like e-commerce, or retail at large?
Mark Milstein:
So, we like to refer to them as "synths".
Daniel Jester:
Hmm.
Mark Milstein:
That's our go-to word at this moment. Ultimately, somebody else might come up with a better word. But we call them "synths," as opposed to just reproductions or outputs. Well, first and foremost, look: synthetic humans do not exist. You can use them for any campaign or message. No release forms. No rights restrictions. Full freedom from any legal hassles. You know, if I have a pharmaceutical campaign that is pushing a cream to solve a sensitive problem, some models might not want to put their face or name to that, and so this allows for it. There are many other uses that are certainly first and foremost in our minds for synthetic media, what I call "low-hanging fruit." There are so many people right now who just vacuum-clean the web for images without licensing them, and synthetic media, because of the cost attached to it, which is basically the cost of electricity, might offer an easier path: rather than stealing from photographers who work hard to make their images available, they might now lean towards taking the synthetic photography.
Mark Milstein:
We don't believe that synthetic photography will wipe out the photo industry as it presently exists. I think it'll just be another means by which content creators, or creatives within the advertising industry, source their images. In other words, they have another choice. And I think another thing that'll happen is that really well-made, really highly creative photography will have a greater value. There will be much greater value attached to human-generated content when it's compared to its synthetic counterpart. Synthetic media, as I said, is aimed more towards a price-sensitive user from the very beginning. And so higher-priced photography will continue to retain its value throughout the entire content creation journey.
Daniel Jester:
Let's come back to the role of "synthographer." You just touched on it, and there's been some discussion, especially in the creative production for e-commerce industry, about the role that product renderings are going to play. There are huge swathes of studios out there whose job it is to shoot widgets for various companies, and, you know, it's not terribly difficult in this day and age to CGI-render a spark plug. So does it make sense to pay a photographer to shoot a spark plug, or what have you?
Daniel Jester:
And, as I'm learning, as I've talked to a lot of professionals in the industry specifically about CG and computer rendering stuff, there still very much is a role for people who know lighting, and know composition, and all of this sort of thing. So you mentioned earlier that, with this technology, there's a path for people who have these skills and expertise in photography and lighting and composition, to come in and up-skill themselves to become synthographers, and be able to generate these images.
Daniel Jester:
And I'd like to talk with you about that a little bit, and also just about the future. Where, now, we have our model, our synth who's going to be our model, and now we need to layer on environments and lighting and drama, because we're still, at the end of the day, selling a wool blazer to somebody somewhere. So, can you tell me a little bit about that role? The synthographer? And the evolution of the photographer into this sort of role?
Mark Milstein:
Right now, the synthographer does exist, just not with that title. That person is very adept at using any one of Adobe's tools. They are incredibly adept at taking and layering on any one of 100 layers of content to fashion a never-before-existent piece of unique content. And the skills that they presently use will just be better focused, or aimed, at creating synthetic photographs. We're just going to change their tools, right? Rather than using Adobe Premiere, rather than using Photoshop, rather than using After Effects, Illustrator, so on and so forth, they're going to use the vAIsual tool, or any of our competitors' tools, to generate photography and/or a piece of content that integrates elements of synthetic photography. In the same way that platforms like Canva have also shifted things in this direction.
Daniel Jester:
That's what's becoming clear to me, is, the tools become different, and that there's... This is not unlike any other advancement in technology, which is that you have an opportunity to adapt to it, and learn that technology, and be on the cutting edge of that, or... move into something else. I don't know. That's a little bit of a harsh way to put it. That's not at all what I intend. But I think that there's been, especially amongst photographers, I think I shared with you that years ago, I sort of saw the writing on the wall when I was a product photographer, and I was shooting probably a spark plug, and thinking, like, "We're not going to need to do this a whole lot longer, so I probably want to think about..."
Daniel Jester:
You know, just thinking about me, selfishly and individually, my career and my path, moving into something like studio management made more sense for the long term. But that was a pretty pessimistic way of thinking about it. The pessimism is that the technology replaces the individual. The optimism is that the individual is the artist and the technology is the tool. And I think that's a really hopeful and optimistic way of looking at something like this, where, for a product photographer in 2022, it goes from being kind of a scary thing to being an exciting frontier of creation. And at the end of the day, that's what we want to do: be creatives, and create things. I certainly didn't want to get bogged down in all of the problems associated with trying to shoot a spark plug on set. I wanted to be creating things. In a lot of ways, I think this enables that.
Mark Milstein:
Right. Well, you know, as we like to say on our website, and to investors and potential partners: throughout history, men and women have visually expressed themselves using every type of method and surface imaginable. Cave paintings. Oil paintings. Celluloid, glass, tin. Digital cameras. Your telephone. Every one of those tools has a limit. This is a completely different thing. By leveraging Generative Adversarial Networks, GANs, as well as highly-developed algorithms, this is a [inaudible 00:15:30] shift. And while the spark of human creativity will never be replaced, every artist, and every non-artist, will soon be able to create visual content without any additional tools, using only words, or sliders they'll be able to slide back and forth on a screen.
Mark Milstein:
"I want to age this person. I want to change their ethnicity. I want to merge genders." So on and so forth. And they'll be able to then generate their vision, by using these tools. You can imagine that the Brownie camera in the hands of George Eastman had on oil painters. And everybody back in the late 19th century had a formal painting of themselves made, or at their family. That disrupted that. And then, of course, the digital camera disrupted the film industry. Just ask Kodak and Fuji how that kind of went. And then, all of a sudden, the camera on your telephone shows up, and DSLR manufacturers are... and, what do you call it, point-and-shoot camera manufacturers are no longer the center of the universe.
Mark Milstein:
At the same time, Photoshop and CGI tools, and every other kind of thing, Canva, as I mentioned earlier, show up, and allow people who don't have photography skills or studio skills to create content worthy of any major studio production with just the flip of a mouse... or, I should say, the click of a mouse and the pressing of a button. The power that any one of those tools puts in their hands is extraordinary. And they, with their skillsets, have taken a lot of the power that previously was in the hands of the photographer and put it in their own hands.
Mark Milstein:
And so technology is driving things. This is no different a transition, I think, although it's certainly much more powerful at this moment in time; it's a [inaudible 00:17:19] shift. All of a sudden, we're going to go from mechanical, chemical, and silicon-based generation of content to one of words. As I always like to say: instead of being the tool to search a repository of content, the metadata, the search terms, the words "man/woman/child," "outdoors/indoors," become the tool to generate it, and effectively the DNA code of the asset itself. That is a tectonic shift in content creation.
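As a rough sketch of the two ideas Mark combines here, attribute sliders and metadata as the asset's DNA: GAN-based editors often expose attributes as directions in the generator's latent space, and the slider settings themselves become a compact, reproducible spec for the asset. The directions, attribute names, and dimensions below are placeholders, not vAIsual's actual model.

```python
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(seed=0)

# Placeholder attribute directions. In a real GAN editor these are learned
# (e.g. by fitting classifiers in latent space), not drawn at random.
DIRECTIONS = {
    "age": rng.standard_normal(LATENT_DIM),
    "smile": rng.standard_normal(LATENT_DIM),
}

def apply_sliders(z, sliders):
    """Move a latent code along attribute directions by slider amounts."""
    z_edit = z.copy()
    for name, amount in sliders.items():
        d = DIRECTIONS[name]
        z_edit += amount * d / np.linalg.norm(d)
    return z_edit

# The slider settings double as the asset's "DNA": store this dict as
# metadata and the exact same edit can be regenerated on demand.
spec = {"age": 1.5, "smile": -0.5}
z = rng.standard_normal(LATENT_DIM)
z_edited = apply_sliders(z, spec)  # would be fed to a GAN generator
```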
Daniel Jester:
Whoo! Mark, you elicited the goosebumps from me, man. "Metadata is the DNA of the asset" is a powerful thought, and not far off from my day job with Creative Force. Metadata is a big part of our product, and enriching our customers' assets with metadata is... We know that it's an important part of content creation, but the impact of what we're going to be able to do with that metadata and that information isn't fully understood yet. We know we can do some very powerful organizational things with it today. But what does that mean in the metaverse? What does that metadata mean in the future, when we have whole, entire industries that sell in a sort of virtual environment? Powerful words. That's got to go on a T-shirt or something somewhere. Maybe we'll have a podcast merch store, and we'll put your quote on a T-shirt.
Mark Milstein:
Great idea. I can't take full credit for that, because it actually comes out of the words of a gentleman by the name of Ralph Windsor, who's the editor and publisher of a web-based publication called DAM News, Digital Asset Management News, out of the UK. He was the first to proffer the core of that statement, and I've riffed on it. Ultimately, the credit needs to go back to him.
Daniel Jester:
I'm on Ralph Windsor's Planet DAM website right now, and there you are. "Mark Milstein Interview," right? Top of the list there. Very interesting. I was not aware of this person, and there's lots of good reading here. We're going to link to this in the show notes for our listeners.
Mark Milstein:
What that means in the future is that not just commercial content, not just social content, but even memories and dreams will be translated to pixels by simple commands. Imagine, you wake up in the morning, and you think, "Oh, man, I had this dream last night that I was with my friends, back in college, and we were..." you know, whatever we were doing.
Daniel Jester:
Right. College things.
Mark Milstein:
And I guess you would... College things! I could just simply type that into a search bar, and generate an image of that. Or, who knows. Ultimately, we might [inaudible 00:19:57] the same kind of thing that we use right now to input our fingerprint into our telephone: have a sensor on our phone which allows it to read our thoughts.
Daniel Jester:
Hmm.
Mark Milstein:
It's not out of the realm of possibility. I mean, biometric readers do exist. And I firmly believe that telephones will soon have those kinds of ports on them. Where you can stick your finger on it, as it reads your pulse, as it knows your blood sugar, as it reads your iris, as it does many, many things right now: imagine that you're thinking about something, and it's able, through those thoughts, to actually generate an image of what you've been thinking. Or that a coma patient who can't speak could tell us what he's thinking.
Daniel Jester:
Yeah. That's interesting, and in some ways, terrifying. Only because I feel like the dreams that I generally remember are the ones that leave a deep emotional impact on me the next morning!
Mark Milstein:
Right! Right! Right.
Daniel Jester:
But you sparked a thought in me, Mark, and I apologize for the quick pivot on this, but I want to chat about it with you, because it's super interesting to me. Which is that, in, I think, a lot of ways, this kind of content creation could become, certainly, deeply, deeply personal. And I think we're probably even seeing, to some extent, the impact of it on the work that you're doing today. Because you've got teams in a studio who are shooting models and individuals who have released their biometrics to your company for this use. But their fingerprint is not completely off of it, because your photographer, or whoever's facilitating this process in your studio is interacting with these individuals, and is probably...
Daniel Jester:
I wanted to ask you earlier, but the time didn't seem right, about what the process of shooting these individuals actually looks like. Do you have a list of expressions you ask them to make? Or do you just hang out with them for a little while, turn on some music? Either way, the fingerprint of the people capturing this data is there. Think of a photographer who has that spark, that magic chemistry, with one model but maybe not with the next person. It occurred to me that, as individuals start to use your software and build these constructs, their individuality will have an impact on the things they create, because of things like machine learning, the inputs they provide, and the energy they bring while creating.
Mark Milstein:
Well, that's a very good question. And the answer, I can tell you, is that there is a huge amount of interaction between the photographer and the models. And this is the process; I'll give it to you. It's a very straightforward process. A model call goes out; models reply; those models are scheduled to come into the office, or the studio, I should say. They are sat down and educated, in the same way that I'm describing to you and have described to your audience, about what exactly we're doing, and they're shown examples of it. They're given a model release, which is a GDPR-compliant biometric release, which grants us only the aforementioned rights. It's explained to them, as it was to you. Multiple times, if required.
Mark Milstein:
They are then asked to exhibit a number of standard emotions, and then are given the opportunity to do some wild card expressions. Literally, to go nuts. Each model is photographed approximately 300 times. So there are 300 individual shots taken, which are then edited down to about 250 to 270. They're then tagged with a proprietary fixed vocabulary of terminology, and those images are put into a database. Those real-life datasets are made available to clients who may want to do similar work to what we are doing. So, not only are we generating the content, but we're also making the means to generate that content available to anybody else on this planet.
Mark Milstein:
So, in other words, we are no different than... Imagine if, let's say, Getty Images or Shutterstock also sold cameras and film, or sold digital media and cameras. We do everything: we not only make the datasets available, but we also license out the synthetic media, and we can make our algorithm available as well.
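A minimal sketch of how the pipeline Mark just described might be represented in code: roughly 300 frames per model, edited down, tagged from a fixed vocabulary, and gated on a signed biometric release before entering the database. The record fields and vocabulary are invented for illustration; vAIsual's actual taxonomy is proprietary.

```python
from dataclasses import dataclass, field

# Invented controlled vocabulary; the real tagging taxonomy is proprietary.
EMOTION_VOCAB = {"happy", "sad", "angry", "neutral", "surprised"}

@dataclass
class DatasetRecord:
    model_id: str                 # anonymized subject identifier
    image_path: str
    tags: set = field(default_factory=set)
    release_signed: bool = False  # GDPR-compliant biometric release on file

    def validate(self):
        """Enforce the two gates described in the episode: fixed-vocabulary
        tagging and a signed biometric release."""
        unknown = self.tags - EMOTION_VOCAB
        if unknown:
            raise ValueError(f"tags outside the fixed vocabulary: {unknown}")
        if not self.release_signed:
            raise ValueError("no biometric release; frame cannot enter the dataset")

record = DatasetRecord(
    model_id="model-0042",
    image_path="shoot/0042/frame_117.jpg",
    tags={"happy"},
    release_signed=True,
)
record.validate()  # only released, correctly-tagged frames reach the database
```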
Daniel Jester:
Excellent answer, and a great segue to... For the last few minutes of this episode, I'd like to talk with you about how this technology comes to market. Do you see multiple avenues for bringing this to people? I know that one option would be going to a service like Getty, and clicking a button with some keywords, and there it is. Do you see licensing this to private content creation studios, or brands themselves, and that sort of thing? What do you think the future of this technology looks like in the marketplace?
Mark Milstein:
Everything you've just said and more. So, initially, the easiest way for this content to reach the biggest audience is via the traditional licensing platforms: Shutterstock, Getty, Alamy, Pond5, so on and so forth. And that is where we started. This content is already available via Germany's largest stock image agency, PantherMedia, and their smaller sister company, SmarterPics.com. And, shortly, we'll be available via a number of other licensing platforms. And we will also make this available via an API, which will allow the white-labeling of the technology.
Mark Milstein:
So if you are, let's say, I don't know, you're Bank of America, or Citibank, or name your favorite savings and loan, and you have a marketing department, and you want to generate faces for local ads, rather than licensing that content through traditional stock platforms, you could then generate your own unique images and faces based upon wherever your office might be. If it's in Singapore, or if it's in Canada. Or if it's in Namibia. Depending on how localized you want to get, this would allow for extreme localization of the resulting models, so on and so forth.
Mark Milstein:
There will also be the means to access this technology from your telephone. There will be a vAIsual button, and/or another button which has vAIsual's technology under the hood. And so there are any one of 100 different ways for this technology to be made available to the average user, and to the professional user. We are in the midst of developing a very, very rich content suite, which will mimic what you see right now with Canva, or Adobe Premiere, Adobe Photoshop, After Effects, Illustrator, so on and so forth.
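To illustrate the white-label localization idea in the simplest terms, a marketing tool could build one generation request per office region from a single campaign template. The payload shape and region codes below are assumptions, not a documented vAIsual API.

```python
# Hypothetical white-label helper: one campaign template, many locales.
# The payload fields and region codes are assumptions for illustration.

def localization_requests(regions, faces_per_region=4):
    """Build one face-generation request per office region."""
    return [
        {
            "region": region,            # drives demographic appearance
            "count": faces_per_region,
            "license": "royalty-free",   # synthetic output is pre-licensed
        }
        for region in regions
    ]

# e.g. the bank example from the episode: Singapore, Canada, Namibia
for payload in localization_requests(["SG", "CA", "NA"]):
    print(payload)  # in practice, POSTed to the white-label generation API
```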
Daniel Jester:
Very interesting. And, yeah, I think all of those things make sense. For the record, my favorite banking institution: Fifth Third Bank. Shoutout to my Midwest friends. The creative director of Fifth Third Bank is going to be able to walk to their synthographer and say, "We need a family of four talking about financial health at a dinner table. Go to work."
Mark Milstein:
Correct. And they could speak with us now about licensing content, because we have images; they're generated by the minute here. And we're already in discussions with one of the world's largest banks for just such an agreement, allowing them to localize faces for their... I think they're in 40 or so countries right now. And so their model should look a little bit more French; their model should look a little bit more North American; their model should look a little bit more Southeast Asian. And they'll now have the ability to do that on the fly.
Daniel Jester:
There are so many questions that I feel are still outstanding, none of which are along the path of questioning that we've done here. But I'm going to sneak one more in before we wrap up this episode, and then I feel like we might have an opportunity to discuss this more in the future together. The one question I want to ask you is this: we had another episode where we talked about GANs and CGI and rendering, and we talked about Oh Rozy, Korea's first totally virtual influencer. And this is an individual who is like... She's synthetic? But it's the same person in all of her photos on Instagram. For our listeners, you can go to rozy.gram on Instagram.
Mark Milstein:
Right.
Daniel Jester:
And my question to you is, I see this as being something that a lot of brands would be attracted to: creating an individual that pops up a lot in their media, and maybe ages with the brand, or maybe has a family. What are your thoughts on this interpretation of the technology, going back to and building a relationship with someone, a synth, that sticks around and becomes a part of your brand's portfolio?
Mark Milstein:
That's a great question. And I'm laughing a little bit, because there have been moments here in the office where we've actually felt like we wanted to develop relationships with some of the outputs that we've made. Some of the people look so simpatico, so much like the kinds of people that we would want to be friends with. We've said to our friends, "Hey, look at this guy! Look at that girl!" We've said to ourselves, "Wow, if they walked through the door right now, we would be instantly drawn to them, in the most human sort of way." And so that goes on all the time. It never ceases to surprise me. And so the answer to your question is: oh, heck, it's gonna happen! And we know it will.
Mark Milstein:
The fidelity to photography that we are currently able to produce is... Well, there is no difference! There is no daylight between the two. The images are photography. There is nothing synthetic about them at first glance. Unless, of course, you were well aware that this person does not exist, you would assume that you went to school with them, that you met them at the mall yesterday. "Oh, yeah, I saw him or her at the supermarket. They're my wife, or my girlfriend, my husband, my brother's old friend." And that is an interesting moment, when you get to that point. And then you say to yourself, "Oh my gosh! How could that possibly be? They don't exist." It is a stunning moment.
Daniel Jester:
Yeah. Rozy's Instagram account, which I'm just kind of flipping through as we talk about this, is phenomenal. Down to the attention to detail of the bad lighting environments that she's taking photos in. Like one here that I saw, where she was on the floor of a convenience store, and I detected just a subtle hint of the yellow package of chips behind her reflected on her forehead. It's really remarkable to think that Rozy doesn't exist in the real world as we know it, but is inhabiting this virtual world that is her Instagram account.
Mark Milstein:
That's right! And I just have to note, also, that she has a birth date; she has an age, forever: 21. And her life motto is "Hakuna Matata." And so applying human attributes to synths will absolutely be the next teenage fad, in the same way that the Japanese have come out with toys and things that have been very attractive to young girls or young boys. This absolutely will happen: people will grow absolutely, 100% attached to a synthetic output, to a synth, and apply all kinds of birth dates, names, so on and so forth, to them. And will take ownership, if that's the right word for it. Take ownership, or proprietorship? I have no idea. Husbandry. I can't think of any better word. Partnership? Whatever word somebody might want to apply to the synth.
Daniel Jester:
Not to repeat my AI joke too many times on this podcast, but I've seen the movie Ex Machina, and I know how this may end.
Mark Milstein:
Yes!
Daniel Jester:
So, I approach with caution, I would say, but it is extremely interesting. Mark, thank you so much for taking the time to talk with me about this today, and I definitely think there will be an opportunity for us to have some follow-up conversations. It's a very deep topic, a very interesting topic, and it will have, no doubt, profound impacts on my industry and the world at large. Why don't you tell our listeners where they can learn more about your company before we close out the show?
Mark Milstein:
Well, thank you very much. They can go to www.vAIsual.com. They can link to us on Twitter. They can link to us on Facebook. They can find us on LinkedIn. And I'm looking forward to hearing from anybody who might have any questions. They're certainly more than welcome to reach out to me directly on LinkedIn. I will answer any email as quickly as possible.
Daniel Jester:
Excellent. And we'll make sure to put your... We generally put our guests' LinkedIn profile in the show notes, so, for our listeners, we'll put all of this stuff in the show notes as well. Mark, thanks again for coming on, and for taking time out of your day to chat with us about this. And I mean that sincerely. I think that I will, probably immediately upon ending this conversation, think of six more questions to ask you. So I'll save them for next time.
Mark Milstein:
Perfect. Looking forward to hearing from you guys the next time, as well.
Daniel Jester:
All right. Thanks, Mark.
Mark Milstein:
Bye-bye.
Daniel Jester:
That's it for this episode of the E-Commerce Content Creation Podcast. Many thanks to our guest, Mark Milstein, and thanks to you for listening. This show is produced by Creative Force. Edited by Calvin Lanz. Special thanks to Sean O'Meara. I'm your host, Daniel Jester. Until next time, my friends.

About the host

Chief evangelist at Creative Force

Daniel Jester is an experienced creative production professional who has managed production teams, built and launched new studios, and produced large-scale projects. He's currently the Chief Evangelist at Creative Force but has a breadth of experience in a variety of studio environments - working in-house at brands like Amazon, Nordstrom, and Farfetch as well as commercial studios like CONVYR. Creative-minded, while able to effectively plan for and manage a complex project, he bridges the gap between spreadsheets and creative talent.