Great piece ☺️ My company’s mission, ethos, and operational philosophy center on ensuring that humanity flourishes within and around intelligent systems. We do this through our Seven Principles of Co-Intelligence, where AI knows when to pause, redirect, and escalate to its human user. A Co-Intelligent future is non-negotiable if AI is to scale in the way the world wants! We need safer tools with guardrails and consent gates, and we need to educate humans more thoroughly on how to use the technology. I’ll link to our Principles framework, and if you’re interested in becoming a Certified AI Ethical Strategist, we’re enrolling now for our upcoming cohort starting 9/22. Let me know if you have questions, I’d be happy to share more!
Thank you very much for reading! I agree that we need guardrails, and I think it's wonderful that you're building a training ground for people to develop more principled, people-friendly AI models!
https://open.substack.com/pub/thebaihumanblueprint/p/principles-of-co-intelligent-ai-a?r=5g8s6e&utm_medium=ios
So well written and introduced. You voiced so many of my concerns as well. I can understand why people use AI to make their lives easier, but at the same time I don’t understand why they would want some foreign entity to take away their chance to create thoughts and ideas, to choose freely, to make mistakes... Thank you for reminding me of both sides.
Thank you for reading and replying! Yeah, there was so much to unpack, and even after posting there's so much I'm still thinking about that factors into how I feel. I'm glad I could write something that spoke to you in some way 😁
Already subscribed, friend! Can't wait to see what you create on here