Paul Bratcher’s Post

AI Expert | Transformation | Strategy | Digital | Futurist | CAIO | Startup Co-founder | Keynote Speaker | Make better not just faster

AI Optimism | AI Pragmatism | AI Doomers

Change and its impact will be both incredibly wide-ranging and uniquely personal. It always has been. Rather than trying to pick a side, let's choose to consider everyone's unique viewpoint and the impact on them.

If 'alignment of superintelligence' is the goal (be that a good or a bad thing), perhaps we should spend more time trying to understand the question at the level of human alignment. Otherwise, what can 'it' or 'we' learn from?

Here is my process:
1. Understand the outcome desired.
2. Reflect on the impact of that outcome.
3. Reflect on that versus my personal values and compass.
4. Assuming I want to do it, do it well enough to satisfy me.

My overall framework questions for each stage:
- Does this make it better, as opposed to just faster?
- Afterwards, will the people impacted feel better or worse?
- Is it worth doing well?

My thoughts for a Sunday morning. How do you decide? How would you align intelligence and thought? Should we even?

P
