[Yvonne] Good evening all. I'm Yvonne Apolo from the School of Law at the 51²è¹Ý. Thank you so much for being here with us this evening. We still have audience members logging in, so while you wait, you might like to share in the chat function where you're joining us from today. I'm beaming in from our 51²è¹Ý campus on Dharawal country, taking advantage of the reliable internet connection on campus. It'd be great to see where some of you are situated today. Okay, many people have joined us now, so we'll officially commence. As I mentioned, I'm Yvonne Apolo. I'm a lecturer in the School of Law within the Faculty of Business and Law at the 51²è¹Ý. I will be moderating our webinar this evening, and it's an absolute pleasure to welcome each and every one of you here today. Whilst I hail from law, my current research falls at the intersection of privacy law, psychology and emerging technologies, where I'm particularly concerned with what the future of privacy protection looks like in our digital century. So with that in mind, I'm extremely eager to be in conversation with our expert panellists this evening, who will address the topic of self-image and authenticity online from various disciplinary perspectives. For those new to the series, Luminaries brings together leading UOW researchers, industry experts and thought leaders for a 1.5 hour conversation each month. Through this series, we will discover how research and collaboration at the 51²è¹Ý and beyond is tackling global challenges. Before I introduce our group of luminaries for this evening's webinar, I would like to first acknowledge country.
[Yvonne] On behalf of the university, I would like to acknowledge that country for Aboriginal peoples is an interconnected set of ancient and sophisticated relationships. The 51²è¹Ý spreads across many interrelated Aboriginal countries that are bound by this sacred landscape, and an intimate relationship with that landscape since creation. From Sydney to the Southern Highlands to the south coast, from fresh water to bitter water to salt, from city to urban to rural, the 51²è¹Ý acknowledges the custodianship of the Aboriginal peoples of this place and space that has kept alive the relationships between all living things. The University acknowledges the devastating impact of colonisation on our campuses' footprint and commits itself to truth-telling, healing and education. I'd also like to acknowledge First Nations people from all around the globe, as we will have colleagues joining us from across the globe for this webinar.
[Yvonne] In today's Luminaries webinar, we're in conversation with thought leaders to discuss research on the impacts and implications of emerging technologies, with a particular focus on how changes in our engagement, or entanglement, with digital media intersect with our emotional well-being, with our own self-concept, and indeed with authenticity itself. Of course, the phenomenon of using digital platforms and curating our self-image through content shared online is no longer new. Facebook, after all, became accessible to the general public back in 2006. However, we're now living in a moment where surveillance capitalism is embedded and pervasive in our lives, which has implications for the way we develop and experience our autonomy. It's a moment where the widespread availability of generative AI is confusing, and even masking, the distinction between real and artificial. And at the same time, social media algorithms are demanding frequent and increasing forms of vulnerability from often very young users.
[Yvonne] So it's within this context that the panel today will discuss why it's important for us as individuals, as parents, and as a society to understand new and emerging technologies and to develop an awareness of how they shape our experience of the world. So without further delay, I will now introduce and welcome our guest panellists. First up, we have Doctor Katina Michael, director of the Society Policy Engineering Collective, professor at Arizona State University's School for the Future of Innovation in Society and School of Computing and Augmented Intelligence, and an Honorary Professor within the Faculty of Business and Law here at UOW. Next we have Doctor Yves Saint James Aquino, a physician and philosopher from UOW's ACHEEV, the Australian Centre for Health Engagement, Evidence and Values, situated in the School of Health and Society. We are also joined by Doctor Jasmine Fardouly, a research fellow in the School of Psychology at the University of New South Wales. And finally, I would like to welcome Doctor Michael Mehmet, Associate Professor and co-research lead in the School of Business at the 51²è¹Ý. What a diverse and exceptional group we have here tonight. Thank you for being here with us. For the audience, the plan for this session is that in the first half of the webinar, we'll be hearing from each of our panellists about their research and their perspective on issues around self-image, identity and authenticity online. In the second half, panellists will address audience questions, so we encourage members of the audience to submit their own questions using the Q&A function at any point in the session today, and we will try to get through as many of these as possible. First, let's hear from Katina, who has achieved incredible research impact investigating the ethical, legal and social implications of emerging technologies. I'll pass it over to you, Katina.
[Katina] Thanks so much, Yvonne, for that wonderful opening. So folks, I'm going to focus on the technologies, but tell you a story. I'll frame it in the context of invention, innovation, implications and the inflection point I think we're at at this moment in history. So let's begin with the invention of the mirror. What does the mirror have to do with what we're talking about today? Well, it was discovered some 8000 years ago in southern Turkey, in the region known as Anatolia, and it was made from obsidian, a volcanic glass that people could look at and see their reflection. The age-old game of peekaboo gives our children that boost of understanding and recognising the self. An image is a bit like a mirror where we look and reflect on who we are, and perhaps also begin to cognitively develop those skills that say perhaps we're okay. We recognise our own image over time, building for us some self-confidence and some self-care and self-love that allows us to support and bring forward positive mental health. I can look at my image and love my image. I can look at the self and care for the self and feel compassion and kindness towards myself. It's this mirror which allows us to reflect on our own image and then allows us to grow into adulthood and forge a path forward in positive acknowledgement of who we are and who we become as we develop and grow. But why is this image important? Well, it's the first selfie ever taken, sometime after the invention of the camera, by Robert Cornelius. So we might say the selfie is a new phenomenon, but in fact it is nearly two centuries old, dating back to the invention of the camera. We see here the first camera before us on the left-hand side, and below that, another selfie, showing the hope that potentially we could look into the camera as it gazed outward, and it would perhaps reflect back on us. And after many years, 200 years or so, we have now miniaturised this capability and the gaze has turned inward, to us. It is something that is normalised. So we can also look back at cultural change and say, well, maybe up until the 1930s all the people in the photos looked quite serious and glum. Many of us didn't realise they would sometimes wait 30 minutes for that photo to be taken. You had to stand still. You had to be still. But some of those older photographs are very revealing compared to the 1950s: on the right panel we can see some women in Victoria, Australia, posing for an article in the 1950s. Was this a cultural change or was it something else? On the left, again, a wedding photo looking quite glum actually, compared to our wedding photos today. And on the right, a photo from 1904 or 1906 of a man smiling while eating rice. Well, they did have expression back in the 1900s. They didn't all look like the photos we know on the left. But what's happening here? Well, the photo on the right was taken by an anthropologist with a particular perspective about how to capture people in the scene. But today, what do we have? This filtering of imagery, perhaps this changing of one's self, all through the push of a button. On the right, a photo that is natural, looks beautiful. And I often say real is beautiful. And on the left, one that has been filtered. I know people that take filtered images 100% of the time. In fact, on average, people filter their images 70% of the time, and they'll even filter things like bags, even if there's no human in the image.
So we can talk about innovations through time: the ability to digitally store images, which is a recent phenomenon, and play them back even in a video reel. We can talk about social media entering the game, and social networking and the quantification through algorithms. How many people have liked my comment? How many people have viewed my comment? How many people follow me? We can quantify in social media terms. Smartphones have allowed the miniaturisation of the camera and the gaze to be inward, and the effortless filming of things. Camera apps with filtering and the ability to have covert or overt cameras have changed the landscape, as well as AI shaping, deepfakes and so much more, and the promise of the metaverse. Now, the implications of all of these inventions and innovations are manifold. We can talk about oversharing, that temptation, perhaps, to get more likes by revealing more of oneself. However, that doesn't always work, and increasingly the life cycle of a particular social media post is now not even a fourteenth of what it was five years ago. So the half-life is decreasing because our attention spans are decreasing. And sometimes the oversharing does end up in criminal acts. It can also create mood swings in people as they come out of sharing too much. Sometimes people talk about depression, self-harm and self-harming techniques like cutting, the praising of anorexia through these different applications and capabilities, and even attempted suicide that has been filmed to share again online. Cyberbullying and trolling are additional implications of all of these innovations. Now we can look back to Narcissus, who was approached by Echo in mythology. Tiresias had warned Narcissus's mother: beware of Narcissus seeing his own reflection, because he will be transfixed by that reflection. He will endlessly gaze down and probably end up shrivelling and dying and turning into what we know as a flower, a daffodil. And in fact he was transfixed. He rejected the love of Echo, and he was more concerned with the reflection, despite the bodily embodiment of Echo being nearby. So tragically, in some of the interpretations, he was beating his breast and bruising his own shoulders and chest, wanting to be with the person he saw: why can't I be with my own reflection? He did not realise it was him. But today we have different reflections, our permanent reflection that others can also gaze at. For example, Instagram, Facebook and so many other platforms like TikTok, which transfix us to the digital and perhaps keep the personal and physical at bay. So this whole notion of the mirror to begin with, and then the mirror of ourselves and our reflection, moving away from that self-care and self-love to this narcissistic, over-the-top preoccupation with the self. And this is happening through the online storage capability that allows us to go back and look and be concerned with that which is online. And that famous old adage: mirror, mirror on the wall, who's the fairest of them all? Until the entry of Snow White, the mirror answers that the queen is the fairest of them all. But today, how does the mirror potentially reply when we ask it? And what is that mirror? Is it a dark glass instead of a clear glass? What is that mirror we're gazing into, as opposed to the natural beauty of the world?
We also have this visual here of Justin Timberlake in his song 'Mirrors', and mirrors are like cameras. In fact, the only difference is that the camera captures the image, while the mirror reflects, as Narcissus actually reflected on his own image. And here we see Justin Timberlake, these three fractals of himself. But who is he? Which of the images is Justin, or the true Justin? A dear friend of mine, Sally Applin, talks about this PolySocial Reality. We don't know who we are online, or we are different people in a curated form, on different platforms, with different identities in each of these different gazes. So, the inflection point, which I'll briefly talk about here before closing. What is an inflection point? In mathematics, we talk about a point of a curve at which a change in the direction of curvature occurs. But in business we talk about a time of significant change in a situation, a turning point. And I think we are at that turning point, because we're starting to understand social media in a different way, in a way that says, well, we want less harm and more positive good to come out of it, and what good and what benefits can come out of it. If we look at it from an ethical, legal and social perspective: in the ethical realm, we can talk about human rights, the acknowledgement that we are human, we are independent, despite the digital push, the push to be online, the push to overshare. We have freedom, we have dignity and the autonomy of the self. In corporate personhood, what we have to see, however, is a response from corporate decision makers: the need to be held accountable for their actions, and corporate social responsibility, some kind of soft law, some technical standards to allow us to thrive. In the legalistic perspective, and I know, Yvonne, you have much to share with us in this area in particular, we have individual intellectual property rights. And who owns my image these days, when some companies are scraping 40 billion images and amassing them in databases? In criminal and civil law, new technologies bring with them new benefits, but they are also pushing the boundaries of criminality in some respects, particularly for underage users. In the social realm, health, wellbeing and thriving: how can we build better designs to allow us to flourish online instead of shrivel online and be subject to harms? And then there's this need, at this inflection point, to understand social emotional regulation. How can I regulate my emotional responses to online content? What is acceptable behaviour for the self and for others? So with that, Yvonne, I think I'll close and ask you what you think about the legalistic perspective.
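For readers who want the mathematical sense of an inflection point made precise, here is a minimal worked example; the cubic is an illustrative choice, not something from the talk itself:

```latex
% An inflection point is where the second derivative changes sign,
% i.e. the curve switches between concave and convex.
% Example: f(x) = x^3.
\[
f(x) = x^{3}, \qquad f''(x) = 6x
\]
\[
f''(x) < 0 \;\text{for}\; x < 0 \;(\text{concave}), \qquad
f''(x) > 0 \;\text{for}\; x > 0 \;(\text{convex})
\]
% The sign change at x = 0 makes the origin an inflection point:
% the direction of curvature turns over -- exactly the metaphor
% being applied here to social media's turning point.
```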
[Yvonne] Well, it's quite complicated when we're talking about exactly who owns an image. And there's much dissatisfaction, particularly in Australia, with the state of privacy law. Traditionally, privacy law would be the area that we turn to when we're talking about the protection of our self-image. One of the concerns that I would have is not only around the ownership of an image and how much we're sharing online, but then, of course, what is done with that: in particular, all the data analytics that is performed on the basis of what we share, and all of the micro-targeted personalisation and predictions that are made about us when we share those images. And from a legal perspective, often we're talking about the fact that it's doom and gloom, that the law fails to keep pace with innovations in technology, and that it's hard to see that changing. In the context of Australia, there is some good news in that the government is paying a lot of attention to some of these issues, which we'll probably hear about more as the session unfolds. There are myriad issues associated with sharing our image online and the way we behave on social media platforms and the like. But we've had the ACCC inquiring into digital platforms and digital advertising, and they have made recommendations to enhance consumer privacy protection. We have had the Australian Human Rights Commission look into issues around human rights and technology, and they have recommended that we should perhaps require human rights impact assessments to be performed before government agencies and private organisations use AI-informed decision-making systems. In 2023, the Attorney-General's Department issued its final report for the Privacy Act review, and it made a suite of recommendations for reform that would increase the parity of Australian privacy laws with the position in the EU, which is said to have a stronger position in relation to data protection and the protection of one's self-image. The Australian government has subsequently provided in-principle support for a number of these recommendations. So reform in this space seems to be around the corner. Just to cap off this list of good news: we currently have the Senate Select Committee inquiring into adopting artificial intelligence in Australia, and in addition to exploring the benefits, the terms of reference include a focus on examining the risks and harms around the use of AI technologies in Australia. So there's a lot happening. But in saying that, there's a huge difference between inquiring into some of these issues and actually taking action. There's been a lot of talk, a lot of writing, but not much action from a legal perspective. What's my perspective here? I think that there are a number of parallel changes that could be made in relation to what we do in law. I think that we need lawyers and lawmakers to be working in multidisciplinary teams, replicating the disciplines that are here within this panel, so that there's a better understanding of the nature of digital technologies and the uses to which these technologies can be put. Because we want to avoid, on the one hand, knee-jerk and wholesale reforms that are ill-informed, and on the other hand, waiting until we see actual harm coming from the way that we're using and sharing information.
And by then the technology is entrenched, which is arguably the position we're in when we're dealing with data analytics and the commodification of personal data as a business model. I think you've listed some relevant areas of law there on your slides that need to ensure they are applying in an effective way to the issues that we're seeing today around personal information protection. Consumer law is another one that we could add to the list. It could perhaps better protect against problematic personalisation, predictive analytics and other manipulative uses of data by introducing an unfair trading practices prohibition. We could have certain practices prohibited under the Privacy Act. We could have discrimination law amended so that indirect discrimination arising from AI systems could be guarded against. But to wrap up this response, which is probably going along many tangents: I think what's important is to take a values-based approach to our next steps here, instead of just trying to minimise known harms. Think about the values that are at play when we are talking about the impacts and implications of technology. I think at the heart of our concerns around technological innovation is a desire to safeguard the potential for human flourishing, by which I mean to realise our individual potential and to protect human dignity. And I think that law only has a cursory understanding of these concepts. So learning from those disciplines that are actually shedding light on what it means to develop individual autonomy and to treat humans with dignity is an important step in the creation, interpretation and application of law. But in a way, I've prefaced a lot of the impacts of technology without us first talking about what some of those are. And so I would like to ask you a question so that you can elaborate on some of this. Katina, you have given us this beautiful, maybe potentially disturbing, account of the historical relationship with self-image. There's a long trajectory here in relation to sharing our self-image and our preoccupation with self-image, but a lot of innovation recently. So what do you see as being the next steps?
[Katina] Well, firstly, to say I loved your comments, Yvonne, and no wonder we're working together on soft law and many other ways forward and strategies to overcome and address these challenges that we're all living through. I think there is this psychosocial, emotional, technological thing happening here, and we call it the pacing problem in lectures we've given together; you've guest lectured at ASU for us. And I just want to focus on that area of values. I think we need more media literacy. We need better design. We need multi-stakeholder consultation. We need everyone to get involved. We need corporations to be honest about how they're utilising the data that we're actually sharing, and not to always think about profit, but the well-being of others. There are a lot of things we can talk about here, but I do look forward to hearing what the other speakers have to say. Perhaps we can return to this later on. But thank you so much for your wonderful legal perspective, Yvonne. And I'll pass the baton on.
[Yvonne] Thank you, Katina. Yes, it would be good at this point to hear from the other panellists and what they have to say in relation to the impacts and implications of these technologies. So, Yves, I might hand over to you. Could you please tell us about your research on this topic?
[Yves] Sure. Thanks so much. It's wonderful to follow on from Katina, and thanks, Yvonne, for moderating. For my part, I will talk a little bit about beauty apps and the ethics of aesthetic evaluation through the lens of artificial intelligence. And I believe this is going to be quite a specific case study of what Katina has discussed during her talk. When we talk about beauty apps, these are software applications that are designed to evaluate one's appearance, usually by providing a score. Sometimes it's 1 to 10, as you can see in the example on this slide, which is called the Golden Ratio Face mobile app. Others provide a score from 1 to 100.
[Yves] So you can say that beauty apps are an extension of social media apps with filtering capabilities, and that also includes Zoom at the moment. Some beauty apps include functionalities that track beauty maintenance services: how often you go to a particular cosmetic clinic or cosmetic surgeon. Some beauty apps allow users to virtually simulate cosmetic treatments, including cosmetic surgery, and some of these apps also suggest specific interventions. More and more, a lot of these manufacturers are claiming the use of artificial intelligence and machine learning to improve the accuracy of these apps. If you look at the example of the Golden Ratio Face mobile app, they recently included categories such as gender and ethnicity to claim that they're trying to improve the accuracy of their apps, after criticism that a lot of the standards or metrics they are using are really Western-centric. So the idea that they're now trying to consider a person's race or ethnicity is allegedly an improvement, and of course that raises the question of how exactly they are classifying ethnicity or race. A lot of the claims of these beauty apps are really based on a golden mask that was developed by a cosmetic surgeon, referred to as Marquardt's Golden Mask. The idea is that the mask is superimposed over a photo of a face to evaluate that face, and any deviation from the lines would require cosmetic surgery. It's supposed to be based on the golden ratio, specifically as illustrated by da Vinci's Vitruvian Man, and the claim of the surgeon is that this kind of mask is objective and universally applicable. This golden mask I studied for my PhD project, which examined the ethical implications of pathologising ugliness in cosmetic surgery. What pathologising ugliness means is that a lot of cosmetic surgeons, in Australia and internationally, are framing unattractive features as something that's pathological and requires medical or surgical treatment. There are a couple of research questions that I'm really interested in, and that I'd also like to hear about from my co-panellists and from you, Yvonne. The first one is a philosophical question: to what extent can we really automate the evaluation of a person's appearance? One of the issues with any kind of automation, whether it's artificial intelligence enabled or not, is that it carries the baggage of beauty ideals, which historically we know are marred with prejudices based on gender, race and disability. So to what extent will AI automation replicate these kinds of prejudicial judgements? The second question I'm interested in discussing and looking into in my own research is: what are the ethical and psychosocial implications of an app that explicitly rates a person's appearance? This could range from one's own judgement about oneself, to how engaging with these kinds of apps impacts one's self-image, to how it impacts the way we view and interact with other people. And we are now putting scores and explicit instructions on what you can do to improve your beauty score. So these, I believe, are really important questions that we need to examine, and I'm really grateful to have the opportunity to be one of the researchers looking into this issue. I'd like to stop sharing. Thank you so much.
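For context, the golden ratio that Marquardt-style masks appeal to is a specific number. The scoring step sketched below is a simplified illustration of the general approach, not the app's actual formula:

```latex
% Two lengths a > b > 0 are in the golden ratio when the whole relates
% to the larger part as the larger relates to the smaller:
\[
\frac{a+b}{a} = \frac{a}{b} = \varphi
\quad\Longrightarrow\quad
\varphi^{2} = \varphi + 1
\quad\Longrightarrow\quad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618
\]
% A mask-style score then penalises deviation of a measured facial
% ratio r (e.g. face length / face width) from phi, for instance
% score = max(0, 100 - k|r - phi|) for some constant k.
% The contested assumption is the leap from "r deviates from phi"
% to "unattractive, and in need of surgical correction".
```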
[Yvonne] Thanks, Yves. That's really fascinating. I don't know if I should admit to the fact that I've placed one of those masks on my face using a filter in TikTok before; the results weren't great. Anyway, you're exploring those really important questions, and I assume there are a number of implications for one's self-esteem from using these beauty apps, particularly ones fitted with AI capabilities. Are you aware of any regulatory mechanisms that oversee beauty apps to help protect consumers from harmful messaging? And if so, are they effective?
[Yves] So at the moment, apparently, it's one of those technologies that sit in a sort of regulatory vacuum. Probably the most directly responsible for any kind of regulation would be the duopoly of the Apple App Store and Google Play, which are the sources of these apps, but only insofar as somebody makes a complaint, or there are privacy violations. In Australia, it might be the Australian Competition and Consumer Commission that would be responsible, but a lot of their interest is in privacy or direct harm. When you talk about psychosocial harm because of appearance-based judgement, it's still not very concrete; it's not seen as an immediate type of harm that would justify taking something down, or mandating some form of statement to inform a user that this might be psychosocially triggering or problematic. So at the moment there's still not a lot of regulation to ensure that apps will not lead to body dissatisfaction, for example. It's an ongoing issue.
[Yvonne] Really interesting. Thank you. At this point, we might actually move on to our next panellist. Jasmine, would you like to discuss some of your current research?
[Jasmine] Thanks, Yvonne. Can you see my screen? Okay, great. I'll really briefly talk about some of the research looking at social media and users' body image. Often when we think about social media use, we think about screen time and how much time people are spending on certain platforms. But the research tends to find quite weak relationships between spending more time on social media and users' body image concerns, wellbeing and mental health. What people do on social media really seems to be key when it comes to their wellbeing. Of course, if you engage in more harmful behaviours, then time is important. But within that context, if a young person spent two hours on social media watching puppy videos, I wouldn't be worried about their body image. Now, there are phenomena that have been around for a long time: societal beauty ideals, which Yves has just nicely presented on, and also this positivity bias, where people want to be seen positively by others. These are not new phenomena, but social media features and functions give people the tools to really control how they appear to others, so they can play into these psychological constructs, and they can also heighten these ideals and this positivity bias. We know that beauty ideals are very narrow: there's usually one particular body type that is promoted as attractive for men and one for women. We know that these ideals can shift over time, but it's still usually one body. Now, we're all different. We are born into bodies of all different shapes and sizes; we're all unique. And if we're all trying to look the same way, then it is always going to be unattainable and unachievable for most people. These beauty ideals are quite widespread on social media, and they can be heightened with beauty filters and other kinds of functions. There is actually quite a lot of research showing that viewing this visual content, images or videos of people who match these beauty ideals, can be harmful for people's body image and put them in a negative mood. That's because people tend to compare their own appearance to the appearance of the people in the images and videos, and judge themselves to be less attractive, and because people can come to internalise the ideals for themselves and believe that it's important to achieve those ideals to be accepted by others and to be happy. On top of that, and Katina has touched a little bit on algorithms already, the recommender algorithms are focused primarily on keeping people engaged on their platforms, and that can be harmful: if someone selects to view an idealised image or video, or if they spend a little bit longer looking at a weight loss ad, then the algorithm can think that's what they're interested in and feed them more and more and more of that content. And that's where I think social media platforms can be particularly toxic, appearance-based environments for people who are potentially already vulnerable to eating disorders and other mental health concerns. But it's not all bad. There's also content on social media that we believe could actually improve people's body image, and we've been doing a lot more research in this space at the moment. This content can contain visual images and videos of people with diverse body shapes and sizes.
It can contain natural, unedited bodies, less sexualised bodies, content that really challenges those beauty ideals and promotes the acceptance of all bodies as beautiful and acceptable, and also content that encourages people to focus on what their body can do rather than how it looks, and to focus on other aspects of themselves, not just their appearance. We've been doing research in the lab showing this type of content can be positive. We're doing some research in everyday life suggesting that maybe viewing just a few of these posts, amongst all the other posts that people are looking at on social media, might be enough to improve people's body image when they're on these platforms. So I think there's a lot more that we need to do in this space. But the question that comes up for me is: how can we reduce the amount of idealised content and increase the amount of positive content that people are looking at on social media? And that's it for me. I'll stop sharing.
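The feedback loop Jasmine describes can be seen in miniature. Below is a toy sketch of an engagement-driven recommender; the categories, weights and reinforcement rule are illustrative assumptions, not any platform's actual algorithm:

```python
# Toy model of an engagement-driven feed: content a user lingers on
# is read as interest and recommended more often, compounding over time.
import random
from collections import defaultdict

CATEGORIES = ["puppies", "idealised_bodies", "weight_loss_ads", "body_positive"]

def simulate_feed(steps, dwell_bias):
    """dwell_bias: category -> how strongly a view reinforces that category."""
    weights = defaultdict(lambda: 1.0)   # start with uniform interest
    shown = defaultdict(int)
    for _ in range(steps):
        # Sample the next post in proportion to accumulated engagement.
        cat = random.choices(CATEGORIES,
                             weights=[weights[c] for c in CATEGORIES])[0]
        shown[cat] += 1
        weights[cat] += dwell_bias[cat]  # lingering is rewarded
    return dict(shown)

# A user who dwells only slightly longer on idealised content...
feed = simulate_feed(1000, {"puppies": 1.0, "idealised_bodies": 1.3,
                            "weight_loss_ads": 1.2, "body_positive": 1.0})
print(feed)  # ...typically ends up with a feed dominated by that content.
```

The point of the sketch is only that a small initial bias compounds: it is the reinforcement rule, not the user's stated intent, that ends up shaping the feed.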
[Yvonne] Thank you, Jasmine. On the basis of the research that you have done, I'm wondering, in terms of established practices, how do likes and comments on social media influence users' body image? And then in terms of emerging practices, something we're going to see more of in coming years: how much do AI influencers and AI images impact users' body image?

[Jasmine] Yeah. So I think comments and likes are all part of the social media environment, which I think is really important to take into consideration. There is research suggesting that people tend to make more positive appearance-based comments. There is appearance-based bullying on social media, but that's not as common. And so there's some suggestion that these positive comments may actually heighten body image concerns, or at least increase focus on appearance. But really, I think the main thing that's coming out is that the visual aspects of social media are driving most of the effects. So looking at images and videos, that's most of the effect; the comments and the likes may heighten or reduce it a little bit, but the visual aspects really seem to be what's key when it comes to body image. AI influencers? I'm sure we'll maybe talk about this later. At the moment they scare me a lot, only because, from what I've seen so far, they seem to be incredibly idealised, incredibly sexualised: all of the things that we're trying to reduce in social media. My concern is the potential to make ideals even more unattainable. But I don't think it's necessarily something we all have to be afraid of. It's more about thinking right now: what can we do? What laws or policies can we think of for the future to stop that from happening? How can we live with, or work with, AI advertising influencers in the future to reduce what is a real risk of harm? How can we stop that from happening?
[Yvonne] Thank you. I was going to leave audience questions until the end, but we actually have a question that perhaps, Jasmine, you might have an opinion on. Feel free to leave it to the end if you'd prefer. One audience member has asked about LinkedIn in particular. A lot of private information and images are shared on LinkedIn, but that type of platform isn't getting as much negative attention. Do you have any views on why that might be the case, why that type of platform doesn't spark as much fear, I guess, around the use of our personal information?

[Jasmine] Yeah, that's an interesting question. I think when it comes to body image, LinkedIn maybe isn't looked at as much because it might not be as popular amongst adolescents and young adults, the samples we tend to focus on. Not that body image isn't an issue for everybody; it is. We know that it's prevalent across lots of ages and genders and different body sizes and cultures. The way I tend to approach this topic is, rather than thinking of the platform, to think of the content. So looking at an idealised image seems to have a similar effect whether it's on Facebook or on Instagram. We've done a recent study suggesting that maybe idealised videos on TikTok have a similar effect to images on Instagram. So I think if there was that type of content on LinkedIn, I would expect it to have a similar effect. But of course, all the platforms bring their own nuances, different functions and features which could heighten or reduce effects. I think all of the platforms need to be investigated. I don't know how much LinkedIn would be specific to appearance, given that it's more of a professional website as far as I know. But yeah, I think we should be looking at all of the intricacies of all of the websites and how they might change these effects.

[Yvonne] Thanks, Jasmine. Okay, we might be switching gears a little bit here, turning to Michael. I know that your work is on marketing. Would you like to expand on what you're doing in this space?
[Michael] Yes. Firstly, thank you very much. I'll just get my slides up. There we go. What we've heard so far is a lot of people talking about all the negative things, and marketing is probably responsible for operationalising all those bad things in a profitable way. So let me preface this by saying sorry, but also: money runs the world, and unfortunately that's a reality we have to deal with. I'm going to run you through some of the stuff that marketers do, which is probably questionable as far as ethics, privacy, policy and the rest of it go. But I think it's really important that we understand the thought process behind the technology. So I'm going to run through this as nicely as I can. As you've already heard, images matter, and marketers have really tapped into this, and I'll run through the strategies in a second, by creating unrealistic standards. The LinkedIn question was an interesting one, because LinkedIn goes ahead and creates unrealistic work-output envy, to a degree. We expect everyone publishes a paper every day; we expect everyone gets an award every day. It really feeds into setting these unrealistic standards. That essentially distorts our reality, and that can impact everything from body image to how we feel about ourselves: feelings of inadequacy, low self-esteem, everything the other presenters have talked about. But I want you to think about this as setting the mood. Marketers do this to set the mood, to take advantage of those negative feelings. Right? So once the marketers make you feel crap about yourself, they exploit that vulnerability amazingly effectively. It's not just the algorithms now. The algorithms are one thing, but they've got AI and GenAI sitting behind systems that identify exactly how long you've scrolled, how long you've stayed there, what you've looked at, how you thought and felt about it. And it does an amazing job of targeting that, reinforcing those perceptions and offering a potential solution to save the day. So it first makes you feel bad, leverages the vulnerabilities, and then sells you a solution through the act of materialism. It makes you think: hey, if I'm going to feel better, I need that. If I want to feel better about the world, I need this. Young children and young adults really don't understand that materialism and happiness typically don't work together too well. Marketing has done a fantastic job of linking materialism with happiness. Just think about how that's impacted your life, because I still buy stuff hoping it will make me feel happy; hence I've got something from Amazon waiting at my door when I get home. So it encourages materialism. And then, as we've already touched on with the data, it normalises surveillance. The number of times I've asked my students, are you worried about your data privacy? And they go, nup, don't care, everyone knows everything about me. Marketers have done a fantastic job of normalising surveillance. Let's take a second to digest how scary that really is. So how have they done it? They've executed through a variety of strategies. They've used influencer marketing, which makes us want to reach and touch the figures we aspire to be. It offers exclusive deals to bypass the sections of your brain that control reasoning and jump straight onto impulse.
It uses social proofs like testimonials to make you feel part of the herd, part of a particular social group. It encourages gamification, again bypassing the reasoning functions in your brain to make it feel like a game. It introduces VR and AR, again for that playful sense of interaction, all while collecting data, pushing notifications and positioning products to sell. And once you've bought those products, you then upload them yourself as user generated content. So young people and consumers then become part of the problem; they actually start contributing to the beast, which is marketing. So congratulations, marketing: you've slowly ruined the world, but done it in a really, really fun way, particularly if you like games.
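The signal-to-pitch chain Michael describes (track dwell, infer a sore spot, sell the fix) can be reduced to a few lines. The names and numbers below are hypothetical; real ad systems are vastly more elaborate, but the shape of the funnel is the same:

```python
# Toy version of the "make you feel bad, then sell the fix" funnel:
# dwell-time signals -> inferred insecurity -> targeted pitch.
from dataclasses import dataclass

@dataclass
class DwellEvent:
    topic: str       # e.g. "fitness_ideal", "skincare", "puppies"
    seconds: float   # how long the user lingered on the post

def infer_interest(events: list[DwellEvent]) -> dict[str, float]:
    """Aggregate dwell time per topic: the 'how long you stayed' signal."""
    totals: dict[str, float] = {}
    for e in events:
        totals[e.topic] = totals.get(e.topic, 0.0) + e.seconds
    return totals

def pick_pitch(interest: dict[str, float]) -> str:
    """Aim a 'solution' at whatever the user lingered on most."""
    top = max(interest, key=interest.get)
    return f"Sponsored: the product that fixes your {top} worries"

events = [DwellEvent("fitness_ideal", 42.0), DwellEvent("puppies", 5.0),
          DwellEvent("fitness_ideal", 31.0)]
print(pick_pitch(infer_interest(events)))
# -> Sponsored: the product that fixes your fitness_ideal worries
```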
[Yvonne] Thanks, Michael. So, marketing has been extremely effective, as you've...
[Michael] Very.
[Yvonne] ...described. Is there anything that can be done to combat some of the impacts of marketing on us as consumers, particularly when we're talking about children and young adults?
[Michael] Yes. I'm going to start with what you shouldn't do as a parent, as a guardian, as a responsible adult, and that's advocate for prohibition, because as you advocate for prohibition, that creates an allure. We saw that with alcohol, drugs, everything: as soon as you say to someone, you can't have that, they're going to want it. Right? Equally, leaving children and young adults in this laissez-faire attitude, giving them a screen and saying have fun with it, we'll see you in an hour because, you know, I want a glass of wine; that's not going to end well. You can't do that either. And these generic, stock education packages, where, you know, internet safety would be good? That doesn't work. The number one thing I would advocate, and it's something that's already been brought up here, is lobbying for regulatory change. Personally, that's my number one thing. Leaving it up to large companies to regulate themselves is the most naive, stupid thing I've ever heard in my life, right? Forget about it. They're not going to do it. They're making way too much money; they need another underground bunker. Regulation is really the main way to go. And then we need to focus, and I think Katina mentioned it before, on understanding values, building up those values in people, understanding the benefits of detaching from the screen, and critically analysing what you're looking at and why it's there. The reason I got into marketing and marketing research was to understand the beast so I could counter it, and I think that sort of thinking needs to be sent out through the masses. We really need to understand what we're seeing in order to discern whether it's good or bad for us, essentially. The other thing is, I know kids don't listen to parents, because parents often aren't great role models when it comes to screen time themselves. But there need to be more conversations at home about what this actually means, what the consequences are, how to use technology responsibly, and how to engage with marketing more responsibly.
[Yvonne] Thanks, Michael. Now, we might actually be able to hear a little bit more of your thoughts on that, and perhaps we can even unpack whether there's a middle place between regulation and self-regulation, somewhere we could meet in the middle if the law is struggling to keep pace here. We might be able to do that by way of this next question, which I would like to put to the whole panel, proceeding through each panellist in the same order as before. I ask this with that pacing problem in mind; this question comes from a place of dissatisfaction with what law is doing in relation to the acceleration in the speed of innovation. So I'm curious to hear your perspectives on, firstly, whether the concerns we've just learned about are different in any material way from those associated with traditional media or traditional marketing, which also drove beauty standards, impacted self-image and influenced consumer decision making. And secondly, given our willing subjection to technological innovations, which seems to go even beyond normalisation, what can be done? Many of you have touched on this already, but what can be done to mitigate the harms or enhance the benefits of our time spent online? So, Katina, I might start with you. Just to summarise: is there anything different here, and what can be done?
[Katina] That's a great question, Yvonne. As I showed in my presentation, I think the concerns have been there since the beginning, since people could reflect on their own image and recognise their own image. What's changed is the storage of data, the miniaturisation and proliferation of cameras, the fact that we can turn the gaze on ourselves and then share it with the world, and perhaps do that in an impulsive mode when we're not thinking straight, and the repercussions of that will follow. I do believe there are things that we can do, and I do think organisations have a role to play in all of this. But I also think creating digital and media literacy programs beyond cyber safety programs is very relevant, taught to our children from early childhood onwards as they develop and gain responsibility and independence, as they become teenagers and young adults. How do we master the media and the digital platform and not allow it to master us? We want to exploit technology for all its benefits and not be exploited by it. We want to have some control over it ourselves. I think there are support apps and tools; I was an ambassador for antisocial.io some years ago, which demonstrated that where there are negative externalities from the deployment of technologies, we can use technology to combat that technology's effects. Corporate social responsibility is important, and organisations should be held accountable for their position and what they do with our data: how transparent they are about sharing our data, and what data actually is private on dating websites, gambling websites, investor websites and so much more. It's all gamified; it all looks like a form of gambling to me. Corporates mustn't exploit our goodwill in sharing, and we need to know what is being done with our data, and we need to develop new informed consent processes for that. I think you mentioned that as well, Yvonne. Better design: how do we do that? We acknowledge the risks early on in the development of these new applications and tools. And then it's everyone's problem. It's not just parents and teachers. It's not just the children. It's all stakeholders: the third sector as a totality, not-for-profits, advocacy groups and non-government organisations that look at communications, but also government agencies and entities, business and much more. Even the media has a huge role to play here. It is multi-stakeholder involvement. And I'll just share with the audience some snapshots of work I've been involved with, with peers from UOW, including Doctor Roba Abbas of the School of Business. Here, I encourage people to watch this documentary created by Telecom New Zealand; Attitude Live is the network, which looks at disability. In this three-part series we looked at social media obsession, and one of the three parts is anchored in my mind. It takes three case studies of three women, all of different ages, in different contexts, and describes the feeling of isolation online, how they responded, and what some of the outcomes were. At the end, the documentary encourages people to go to nature to overcome some of the challenges, to move away from the digital, at least sparingly over time, and to gain some control back. And what we found in studies is that people initially take selfies of themselves in front of nature, but over time that dissipates and nature itself becomes the awe-inspiring location.
The Australian Media Literacy Alliance is a new media literacy advocacy group, and there are many more sprouting up to help us understand how to overcome the challenges we face, and also learn about digital life and what that means. One of many apps is antisocial.io, which shows you, compares and allows you to control your social media use. On average, Australians are online for three hours per day on social media. Perhaps we want to control some of our app use, if TikTok, Facebook or Instagram is an issue for us, and we can do that by comparing ourselves to different age groups or our own age group, to see whether our behaviour is excessive or whether we are an outlier in our particular context. It looks at battery usage, it looks at the number of unlocks on Samsung and Google phones, and it also tracks how many hours you are on particular apps and how many apps you download. But this, I think, is the pinnacle of what we have to contribute: Doctor Abbas and I worked on this standard for over two years, and it was rolled out globally. It was also adopted in the EU, in the C..... It's a collaborative approach, where we look at and recognise children as users and what's appropriate for their age. So how old you are determines what you can have access to and what your rights are, looking at different conventions in international law. But we also emphasise that this particular standard, if adopted for all age groups, young children in pre-school, young adults and seniors, will allow us to thrive more broadly. What it does is implement an age-appropriate risk register. This is a risk register that identifies risks that can be treated early on in the design process, those that can be overcome and addressed through mitigation techniques. It's forward-looking: it's not only anticipatory, but it identifies risks early in the invention and innovation process. And I can state publicly that there have been many companies that have adopted this standard; in fact, a number of nations are early adopters. Indonesia: we presented this particular standard to the Indonesian government last year, and they've adopted it, and the act aligns with it. It doesn't mention the standard explicitly, but as they go to regulation and then enforcement, in the enforcement modality you will see the framework embedded there. So we are gaining traction. Businesses are coming on board. It's just like when we used to say smoking was bad for you: it took organisations 30 to 40 years to figure it out, but they banned smoking from advertising. The same thing can actually happen here. When we consult different panels of stakeholders, children, parents, teachers, other stakeholders in this innovation cycle, we address all known hazards and risks, and we record intentional and unintentional impacts. We think about them before we deploy services. Right now we're rushing to deploy without thinking about the implications for everyday people. We won't be able to anticipate all of these, but if we do this periodically in our design process, which is a continuous artefact, a continuous life cycle, we can nip things in the bud early in deployment if we've missed them at the early innovation stages, apply this age-appropriate risk register and overcome these challenges. I think that's all I wanted to say, Yvonne, on that point. But I think we can actually tame the beast.
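To make the risk register idea concrete, here is a sketch of what a single entry and a triage pass might look like; the field names and scoring are illustrative assumptions, not the actual schema of the IEEE standard Katina describes:

```python
# Illustrative age-appropriate risk register entry and triage pass.
# Fields and scoring are hypothetical, shown only to convey the shape
# of "identify the risk early, attach mitigations, revisit each cycle".
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    feature: str                  # e.g. "beauty filter"
    age_group: str                # e.g. "13-15"
    hazard: str                   # e.g. "body dissatisfaction"
    likelihood: int               # 1 (rare) .. 3 (likely)
    severity: int                 # 1 (minor) .. 5 (severe)
    intentional: bool = False     # intended vs unintended impact
    mitigations: list[str] = field(default_factory=list)

    def score(self) -> int:
        """Simple likelihood x severity product for triage ordering."""
        return self.likelihood * self.severity

register = [
    RiskEntry("beauty filter", "13-15", "body dissatisfaction", 3, 4,
              mitigations=["off by default", "label filtered images"]),
    RiskEntry("infinite scroll", "6-12", "excessive use", 3, 3,
              mitigations=["session time prompts"]),
]

# Triage: treat the highest-scoring risks first, every design cycle.
for entry in sorted(register, key=RiskEntry.score, reverse=True):
    print(f"{entry.feature} / {entry.age_group}: {entry.hazard} -> {entry.score()}")
```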
[Yvonne] I think you could say that. In a way, I think you have responded to an audience question there. One of our audience members has aptly identified that, given the benefit that is received by the state from the commodification of information and knowledge, can we actually rely on state regulation to make a difference here? And in presenting your findings and the work that you and Doctor Roba Abbas have done with that IEEE standard, my understanding is that you're presenting a solution that, I guess, falls under the description of soft law. I'm wondering, just briefly, before we move on to our other panellists, if you could explain generally what that is and how it might work without having to rely on legislators getting involved in the first instance.
[Katina] As you know too well, that's a great question. Legislation takes a long time to put into play. We need precedent, we need cases, we need case law to come up and inject that. Indonesia has taken a top-down approach and said, we're not going to even wait for that; we know what's going on, we're going to introduce it top-down, and they've been very active at doing that. They'll have regulation, I think, in nine months that will give effect to that whole act. Argentina is following suit as well, and we're going to see a number of other European players. We've just seen the introduction of the EU AI Act, and I know Yves mentioned that also. But the FRIA, the fundamental rights impact assessment, will be required for high-risk AI projects, and when we're talking about things like deepfakes and the ability to filter at the click of a button, these will have to undergo a FRIA, a fundamental rights impact assessment, under this new EU AI Act. But I think, Yvonne, it's really important to state that there will be enforcement; there will be penalties in the hundreds of millions of dollars. We've already done analysis of current cases where game companies have been fined 500 million, and different players in the EU something like 400 million, 300 million. A business can't sustain that. We can say they're making billions of dollars, and they are the big transnational companies that were alluded to in the comment, but if you break the law a number of times, the penalties are handed down, your brand is affected, and you can say your risk appetite is unlimited, but it's not: you don't have unlimited money. We have seen this time and time again, where share prices fall due to a breach, whether it's of customer data or the wrongful acts of organisations. They do bounce back fairly quickly, but soon, I think, and we have to be positive about this, if enough players do the right thing, and I don't want to mention names here, but we have leaders in this space of digital products for children, others are going to say: that's the best in class, I'm going to follow that, let's benchmark. So we're not saying companies are doing bad things in their processes; all companies want to have a robust design process. What we are arguing is that we've done enough homework on enough multinationals, and enough consultation, that we have a best-in-class process to follow. And what we're saying to companies now is: reflect, certify yourselves on this standard. Soft law allows us not to have to wait for that legislation to come; it works through technical standards, policies and guidelines. It's there to move a whole industry in a positive direction, and I think we are really seeing this globally.
[Yvonne] Thank you. That was a really great explanation of the role of soft law there. And in relation to fines and companies eventually listening, we have actually seen Alphabet Inc., the parent company of Google, making changes based on action that has occurred within the EU, and in fact Meta has made changes following different lawsuits against them. So we do see positive changes occurring in this space. Yves, I might turn to you to answer that first question. Is there anything different here, particularly around our use of beauty apps compared to traditional media?
[Yves] Thanks, Yvonne. That's a great question, because, as Jasmine has also discussed, a lot of the issues we're encountering feel like something we've seen before. But I would argue that beauty apps are different. They're different in terms of magnitude, and they're different in terms of personalisation. If you talk about other drivers of beauty ideals, legacy media, television, film, magazines, and I'm not sure if people still read magazines, a lot of these beauty ideals are implicit: they're ideals because they're the kinds of features we see very often. But beauty apps are explicit. They will tell you exactly why you are unattractive; they will provide scores. These are explicit judgements that we don't see in other legacy media. Second, they're explicit when it comes to their prescriptions. If you encounter images from film or television, maybe you'd be inspired by the main character, the hero of the story, and you might be inspired to imitate their features or the way they dress. But beauty apps make explicit suggestions that say: in order to increase your score, you have to undergo these different types of cosmetic surgery. That kind of messaging is different. So in terms of the causal link between exposure to a feature or an app and the potential for psychosocial harm, I feel that the causal link is much more immediate, personalised and definitely shorter. Beauty apps might be carrying some of the same historical prejudices as existing legacy media, but the way the message is delivered and the way users are interacting with it is, for me, materially different.
[Yvonne] Thank you, Yves. Jasmine, what's your take on this?
[Jasmine] Yeah, I agree. I think we can, and have to, take the theoretical models and the findings from traditional media research, at least in body image, into the social media world. But there are differences within the social media environment that can, I think, heighten the effects that we've seen in other forms of media. Beauty apps are one. Who's in the image, the relationship with that person, there are a lot of differences within the social media environment that I think could make some of these effects worse. In regard to where the ownership of change should be, I think it's really helpful to give individuals tools and techniques to improve their own social media environment, but I really think most of the effects will come from strong regulation with teeth. So with actual big fines or consequences for inaction. I think there are a lot of platforms that are already implementing really positive changes to their environments, but my concern is that without regulation these could just be trends that come and go. You can see that in other positive things that have happened in, say, advertising, which are maybe shifting back to where they were. And so I do think we need it. And, like others have said, there are a lot of discussions happening in Australia and internationally. I think we're really at the precipice of change in this space. But I really do think it's difficult, especially with parents and things like that. I think we do need to give them tools. I think open discussion is helpful. But in our research, we find that modelling is the strongest thing parents can do. And parents are also influenced, just like their children, by other marketing techniques, all of the other algorithms, everything that's within those platforms. So there are things individuals can do, and I really think we should be pushing more media literacy and open discussions within families. But I really think the focus should be on changing these environments, because we haven't really seen strong effects from any individual-level interventions that have been implemented yet.
[Yvonne] Thank you, Jasmine. And Michael, from a marketing perspective, what do you think?
[Michael] You're going to hate me with this one, Yvonne. I'll tell you the big difference. Traditional media, it's like a shotgun, and you hope to hit something. Now they're all snipers. Now they can drill down. Before, there were three channels, four channels in Australia. Now everyone is a channel, right? So it's not an even playing field. People go out there and curate their own experiences, their own little space. They're not just passive adopters and consumers. They're active participants. That kicks off dopamine in the brain. It's a completely different field to what traditional media was. It's so much scarier and it's so much more immersive, and people get a high off it. I watched TV in Australia in the 1970s. It was boring and crap. Now I go online and it's very hard to turn off because it's so immersive. I can find out anything I want, and marketers tap into that. They keep you engaged. They understand who you are better than you know yourself. They know where you are 95% of the time in a geographic space. They know everything about you, and they use that to their advantage to keep you connected. So it's completely different to a traditional setting. It's far more sophisticated.
[Yvonne] And your response there, I guess, is really highlighting the example that you provided around the regulation of smoking, given the nature of our time spent online. An audience member asked, are there any examples of regulation that has worked to reduce harmful impacts of marketing? And both Michael and Katina, you have mentioned smoking as an example. Is there anything specific that we can learn from strategies around the way in which smoking is now marketed, or not marketed, that we could apply here in this context?
[Michael] When it's not done overtly, it's done covertly. And that's the risk, right? So what happened when they banned smoking advertising? Gambling advertising went up. Did they get rid of an evil? No, they just replaced it. So, you know, there are unintended consequences to actions when you bring regulations in that we also need to be really careful of understanding and mapping as well.
[Yvonne] I'll now turn to a few more of our questions from the audience. We've got a little bit of time remaining, so please feel free to post more questions in the Q&A if they come to mind. We do have a question around, essentially, why aren't individuals doing enough. This audience member has stated that essentially every social media platform is embedded with AI and has AI features added to the chat. So aren't we just increasing our exposure to cyber attacks and other things by engaging on these platforms? Why aren't individuals taking more action? For instance, and not to put this directly to you, Michael, but you mentioned that your students say they just don't care.
[Michael] They don't care. They're addicted. It's like asking a drug addict, why don't you stop taking heroin? It's just not a reasonable request. But they need help. And I know I'm talking about it like a drug addiction, but it essentially is. It's kicking off the dopamine in their brains. It's gone mental. They don't care. They just want the high. They just want to be seen. They just want to be known. They just want that attention, right? Hence I support the rest of my colleagues in saying we need regulation, because you can't leave it to the individual. If you left it to the individual, would smoking rates be dropping? No.
[Katina] I've got some statistics to add to what Michael is saying, Yvonne. In 2017, a report found that in America people saw between 4,000 and 10,000 online and offline advertisements daily, on sidebars, on clicks in games, or prior to viewing online videos on streaming services or social media platforms. When we compare that to the 1970s in America, people saw 500 to 600 advertisements a day, and today that number is closer to 10,000 per day. And what is perhaps most alarming is that children see, on average, 20,000 30-second commercials each year. Now, when we talk about regulation, we've hit on what's happened in the gambling arena in Australia, and the pressure is mounting. Some regulation has been introduced. In fact, it was a student project that you had last year, and four young men took it on. I was very, very proud of their outcomes, because they all spoke about regulation. And I agree, Michael, the messaging is subliminal. It's paradoxical. It's, you know, don't place that bet, but perhaps you should. It's not said that way, but if you listen carefully to what's being said, it almost trips you up. Was that for or against gambling during a sporting event? And so I think that kind of advertising will be subdued over time. We know it's there. We know it's paradoxical. We know it triggers individuals, the gamblers, for example. And right now I can pick up my smartphone and place a bet anywhere in the world that I wish. I can make an investment decision on a stock very quickly using apps like Robinhood. And I can do much more in in-game purchasing, buying armour and other things. It's all gamified. But I just want to say that soon we'll be regulating advertising like we did with smoking. So I think there is a way. It depends on the political willpower, because taxes are also being gathered by government agencies. So we take a holistic stakeholder approach and start to analyse what's really going on here from different perspectives. It's give and take, but we have to all be moving in the right direction. So when we talk about individual stakeholders, certainly, Michael, I agree, parents need to keep speaking openly in their families about use and appropriate use, particularly until somebody reaches the age of adulthood, and thereafter there is independence. But we do this gradually over time. When children are young, we speak to them differently. But how do we curb it? This regulation sits within this enormous bombardment of advertising. And Jasmine, I agree on body image. How do we become more balanced? I loved what you put up, Jasmine, in terms of the different body weights and the different body types and shapes, because we're not all going to be size eight and we're not all going to look like supermodels. And what's happening online, from my analysis, is that young people will perhaps navigate to different talent platforms, different modelling platforms. For a country of Australia's size, 25 million, I analysed some years ago now, 7 or 8 years ago, that 4 million children were on talent and modelling websites. That's a lot of children when we compare it to how many children exist in Australia. So we must all be supermodels in some way, or we must all be looking to be supermodels.
But the question then becomes, when you start looking at social media platforms for modelling and talent, people start saying things to big supermodels who are highly ranked, because people want to be ranked like them. You know, I love the way your lips are shaped, and I wish I had eyebrows like yours, and these outlandish statements. What's wrong with my own eyebrows? And what's wrong with my eyes? Nothing. We're all made in the image and likeness of something, you know? And all people are beautiful. Real is beautiful. That's the message we should be selling. And I agree, Jasmine, but this bombardment of advertising has to stop, or be really regulated.
[Yvonne] And we do see the ACCC paying attention to this matter, and there have been recommendations to strengthen the protections that already exist under the Australian Consumer Law: to go beyond perhaps just protecting against misleading or deceptive conduct, and to actually prohibit, with big fines attached, these covert messages and the manipulation of user behaviour, by labelling them as unfair data practices. And that's potentially one way that we could perhaps change the nature of marketing.
[Michael] I'm just really curious, because as a marketer, I've studied the history of marketing since Egyptian times and how persuasion techniques were used throughout history. You can't kill it. But I'm telling you now, it's good, as in, it just adapts. As soon as you go left, it goes right. As soon as you go up, it goes down. So I'm really curious, and I'm not answering anything here, I'm just genuinely curious to see the evolution of regulation, if it does happen. Because I'm still very mindful that there are powerful lobbyists sitting behind the scenes who are going to be whispering in politicians' ears. But I'm really interested to see how they do get on top of it. I look on with great interest. I'm really fascinated by how it's going to evolve.
[Yvonne] On the topic of balance, we do have a question from the audience around the fact that there are so many prominent voices in this field actually advocating for the increasing adoption of digital technologies, AI-enhanced technology, to enhance society. I'm thinking, for example, of the medical sphere, improving medical diagnosis; that's seen as something that could actually be positive for all. But we've focused on a lot of the negatives. What do you see as the balance between some of the positive uses and potentialities of this technology and the negative uses and potentialities? Does anybody on the panel have any thoughts on that? Yves?
[Yves] I think that's a really important question, and that really motivates a lot of the work that we're doing. At our research centre, we're looking at the use of artificial intelligence in health care as well, looking at the ethical, legal and social implications. And one of the things that I find really helpful in trying to balance the benefits and the potential harms would obviously be interdisciplinary work. So making sure that different experts in different sectors and different disciplines are working together. Some disciplines pay attention to different priorities, and we know that. So I'm quite hopeful, because a lot of experts from the AI communities are now collaborating with experts from bioethics, from ethics, from the social science communities as well, to look into the potential implications that they don't really pay attention to. And it's not because they don't want to; it's just not within their field of expertise. So I think there is a lot of improvement there, and ongoing conversation, obviously, with consumers, with individuals, telling them this story that there are benefits, but there are also potential harms, making sure they're aware of those harms, and giving them the opportunity to share their views on how to go about balancing the harms and the benefits. I wouldn't necessarily want to say there's only one way to do that, because I think there are different ways, but I want to highlight that consumers and citizens should have the opportunity to have their say on how to approach that.
[Yvonne] And I think that's a very good takeaway message around the need for multidisciplinary conversations in order to tackle this issue. Jasmine, I didn't want to cut you off there. What were your thoughts here?
[Jasmine] Yeah, I think it's interesting. When it comes to algorithms, there are people advocating to have the recommender algorithms removed from social media platforms. I'm in Europe at the moment, and so there's lots of change happening here. And some researchers are saying that when they're experimenting with that, young people really don't like using the platforms; if there's no recommender algorithm, the content is boring to them because it's not personalised. But I wonder if we can use these technologies to get more positive and balanced content. So maybe it's training these algorithms to promote, for body image at least, more diverse appearances, more authentic, more natural appearances. We could have regulations around what should be expected of those algorithms, so it's more about using AI to ensure there is a balance there in regard to harms and what is beneficial. I also think we could regulate advertisements, not necessarily just within social media but more broadly, around what is expected to be in advertisements. That wouldn't stop advertisements from being out there, but it would limit how they're portrayed and what kind of harms they could be having. Yves, I saw that you were wanting to pop in there?
[Yves] No, I was just going to say, I remember there was a law being entertained in France where, if an image was photoshopped or edited, that had to be stated on the image, whether it's a magazine advertisement or whether it's on TV. So any time there's any kind of editing, that has to be disclosed.
[Jasmine] Completely ineffective. We've done a little bit of work on that; it makes no difference. You can't make an idealised image less harmful with a label. You just have to try to regulate it, so those images aren't there on people's feeds.
[Yves] I think this...
[Katina] I think, Yvonne, as well, we can think about ways that AI might protect people, although we're very early in the stages of development and it doesn't always work. For example, I remember being involved in one project that looked at too much skin being revealed by an underage user, and then perhaps using real ID or some kind of age assurance so we know who's communicating what kind of detail, and between whom. That then also encroaches on privacy, potentially, and on one's right to autonomy. There are a lot of different perspectives on what children should and shouldn't be able to do, so I don't hold the governing voice on that and won't say much, but just to say, we have tested, all over the world, the potential for AI text filters to help us be less discriminatory, to remove hate speech. And those filters don't always work, because text is very difficult to interpret sometimes. The English language, for example, is open to different nuance, and so you could be writing something innocently, but the AI might interpret it and block it. And then we have freedom of speech issues, censorship issues and much more. Also, you could be sending a picture of a baptism, for example, or sacraments or some holy event, and it's blocked because of nakedness, but you're actually sharing something with your family network. But we are getting closer, I think, to acting before the fact rather than waiting until after it. We've seen some heinous crimes committed live from first-person shooter perspectives, and social media companies being held accountable for the proliferation of that video across the network, particularly in a spate of these horrifying events between 2015 and 2018, where copycat attacks became a thing to do. We'd rather block it before it goes live than after the fact, because it's very hard to put the genie back into the bottle after something has proliferated on various social media platforms and then, of course, on underground networks like 4Chan and others that are very hard to police criminally. In addition to that, we now have large companies actually using their subjects, us as humans and users, as data annotators. We don't realise we're doing this, but we're annotating live while interacting with an AI client. It seems kind of innocent, but children are particularly the target of this. We've written about it recently, in editorials coming out in the June issue of Transactions on Technology and Society, where children become the data subjects. So they are being watched, but they are also helping the companies better their AI algorithms through their responses to the AI client. So it's tricky: where can AI support us? Increasingly I'm seeing on LinkedIn people saying, stop using the recommended AI response text, because we shouldn't be training the AI on LinkedIn, or what have you. But children, on the various platforms they use to communicate with one another, are being asked, is this picture a banana? And of course it's not a banana, but they're always asked, is it a banana? And the child says, no, it's a chair, or no, it's my mum's car, or I'm in this location. And the questions start to get deeper, you know, which city, which suburb, which direction is the beach facing? Children are interacting with this and thinking, can I delete the client? And they can't.
So on the one hand, I do see benefits where AI can support us in better, more positive messaging and imaging. I do think some organisations will use these filters once they've been developed and we've refined them. But at the same time, we can't use our subscribers as data subjects for free annotation, free labour; you know, that's kind of criminal in its own way. But I do think AI will be there in the solution in the long term.
[Yvonne] I think you've so neatly captured the complexity around this, illustrating that to train an AI to do better is to continue to invade personal privacy in a way, and that to train an AI to be less discriminatory means holding more information about the characteristics of an individual, which continues to perpetuate some of the problems we're concerned with. So there is a path for moving forward here, but that path is looking a little bit bumpy, complex and tangled up at the moment. Now, we are about to reach the 1.5 hour mark for our session, so I wanted to close by asking the panellists if you have any final remarks you'd like to make, any message you would like to impart to our audience on tonight's topic. Perhaps I'll go around the screen in the order that I can see panellists. So Yves, you're first up on my screen. Any final remarks?
[Yves] Thank you so much, everyone, and again, thanks everyone for attending. Just one short message: I hope that policymakers, regulators, any decision makers, really pay attention to body image as an issue and as a direct harm, and that it is taken seriously. And support research; we're trying to explore this space. Thank you.
[Yvonne] Thanks, Yves. Jasmine.
[Jasmine] Yeah. Regulation seems to be a common theme that's coming up, and everyone seems to be advocating for it. I think it's important to get everybody behind it. And, as far as I'm aware, I think the government have recently talked about a review of the Online Safety Act, and it's open for comment. So as many people as possible can advocate for more regulation, maybe via the Online Safety Act. Yvonne, you have more knowledge about that than me, but I think this is the time, if we do feel strongly about making these changes, to really be loud and make an impact.
[Yvonne] There's been a great uptake in relation to the number of submissions being received by the various bodies that have been holding these inquiries in recent years. Again, it becomes a question of a gap, potentially, between inquiries, recommendations and action. But there are certainly many voices engaging in this conversation. I had muted myself there. Michael, any parting words?
[Michael] Yes. Just make small changes in your life. Have dinner as a family. Put the phone down. Go for a walk. You know, sometimes it's the small changes that have the largest ripple effects.
[Yvonne] Wise words. And Katina?
[Katina] Yvonne, I pretty much concur with Michael there, and the rest of the panel, on their suggestions. What a wonderful, robust conversation that was. I just want to note that Google reported in 2015 that we took 24 billion selfies, and ten years on, we're at 34 billion selfies annually. That's an average person taking more than 450 selfies annually, and millennials will take an average of 25,700 selfies in their lifetime. Why? Why are we doing this? Go back to the stories that I told in my presentation. I would like people to think about spiritual things, which are very close to natural things, and to look inside, to remember the physical, and to be more in tune with the space around us. Also, studies have shown us that if people are participating online, most likely they're not just observing, and it's observing that seems to be the issue. When people just observe on different platforms, they're usually more prone to depression or mood swings and mental health issues. So we should participate, and not just observe and compare ourselves to others, and not just on body image, but on how much people are earning, where they're going in transit, whether they have the fantastic family of five with the dog in the house and the fast car. We've got to move away from that. And I think we should move away from the techno myths and more towards the God myth, which is more about inward reflection and actually seeing how that manifests outwardly. So, Yvonne, just in closing from the team, we'd like to thank you for your expert moderation of this panel and for helping us get through the 1.5 hours.
[Yvonne] Thank you so much, Katina. And a big thank you to the entire panel - Katina, Yves, Jasmine and Michael - for joining us tonight. It's been a really enjoyable and robust conversation. For the audience, this event was recorded, so everyone who registered will receive a link to the recording via email. And also just a note that our next Luminaries panel will take place next month, on the topic of Empowered beyond bad bosses: fostering workplace confidence and leadership. And following that will be a session about hepatitis C, I have been informed. So thank you, everybody, and have a good evening.