In recent weeks, your social media feeds have likely been buzzing with a new trend: friends, celebrities, and even your favorite influencers suddenly have stunning, AI-generated profile pictures, or are sharing the exact text prompt they used to create them. That viral picture of your friend as a 1980s anime hero, or the hyper-realistic photo of a pet wearing a tiny crown? Many of those are thanks to new features in tools like Gemini, which let you simply tell the AI what to do, and it instantly delivers.
But this fun, accessible picture-editing power is precisely why AI policy is so important. When anyone can create a perfect, realistic fake of any photo, instantly and effortlessly, the internet is increasingly becoming a place where we can no longer separate truth from lies. We need clear rules and robust safeguards, not just for our picture feeds, but for sectors where the stakes are much higher, like health care and education.
Our team of advisors, alongside colleagues at Think Policy, supported a joint initiative between the Ministry of Communication and Digital Affairs and the British Embassy Jakarta: a nationwide AI Policy Dialogue to dive into this issue. We facilitated cross-sectoral conversations, synthesized ideas, and made sure voices from the field reached key decision makers.
After months of dialogues with over 100 stakeholders across Indonesia, Singapore, and the UK, one thing became crystal clear:
💡 The best way to embrace AI? Build it to solve real problems.
Yes, that might sound obvious, until you’re in the middle of 40 different AI policy papers and various international benchmarking studies, realizing there’s no single “best practice” to follow yet.
That’s why, in the recently launched AI Policy Dialogue Country Report, we decided to listen more than prescribe. We captured the voices of those already doing the work (startups, civil servants, teachers, doctors, developers) across six key sectors: e-commerce, finance, health, education, creative economy, and sustainability.
From these conversations, three big ideas emerged (and no, they didn’t come from ChatGPT).
🔍 Regulate the impact, not the tech
Stakeholders worry that over-regulating AI itself could strangle innovation before it blooms. Instead, we should focus on its impact, such as ethics, consumer protection, and safety, and weave those concerns into existing laws and guidelines.
⚙️ Build more use cases, faster
You don’t learn to ride a bike by reading traffic laws or the theory of cycling. You learn by riding, preferably with training wheels and adult supervision.
That’s why many stakeholders proposed a sandbox approach to stimulate the AI ecosystem: a safe space where innovators and regulators can co-create solutions that actually work. Add a matchmaking system into the mix, connecting innovators with financial support or data from both public and private institutions, and we’re talking real traction.
📚 Reskill everyone. Seriously.
Even jobs once seen as “future-proof”, like programming, are now being reshaped by AI.
This isn’t just about new skills. It’s about preserving social mobility in an AI-powered economy.
Kids need to learn how AI works (and how not to be fooled by it).
University students need to learn how to use AI responsibly in their studies and embrace it to sharpen their skills.
Workers need access to lifelong learning, because AI won’t wait for the next training budget cycle, and some skills in the market may be replaced by AI entirely.
The Indonesia AI Policy Dialogue Country Report is the result of a collaboration between the Ministry of Communication and Digital Affairs and the British Embassy Jakarta, and one step in a broader effort to inform the ongoing development of Indonesia’s National AI Roadmap.
Of course, this is just the beginning. Some questions remain unanswered.
Should Indonesia build its own sovereign AI?
Who should regulate it further?
How do we fund it all?
Well, that’s a conversation worth having. Drop your comments below or reach out to us at pras@amana.id.