Position description
In this role, you will examine technical details for developers and computer science researchers, elevate the most essential points for everyday users, and strike a balance in between for the world’s policy leaders.
At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
The US base salary range for this full-time position is $102,000-$150,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Be a thought leader for AI Transparency best practices by identifying and analyzing responsible AI documentation, public reporting of generative AI model development and product deployment trends, and key AI safety policy issues.
- Lead informed discussions with cross-functional stakeholders on emerging AI regulations and their implications for foundation and frontier model research and development at Google.
- Manage content operations and drive escalations of sensitive government requests on AI-specific issues by liaising with Product, Public Policy, Communications, and Legal teams.
- Develop and launch scalable guidelines for AI model cards, data cards, system cards, technical reports, and other AI transparency artifacts.
- Review or be exposed to sensitive or graphic content as a core part of the role.
Minimum qualifications:
- Bachelor’s degree in Computer Science, Communications, Journalism, International Relations, or equivalent practical experience.
- 4 years of experience in policy, legal operations, regulatory affairs, editorial operations, or content moderation in related industries (e.g., technology, intellectual property), with a focus on artificial intelligence.
- Experience with artificial intelligence research, development, product policy, or public policy.
Preferred qualifications:
- Experience with AI product research and development, AI public policy, and/or AI product policy.
- Experience in technology industry business areas such as operations, data analysis, or internet/online media, specifically related to ML/AI applications and product or public policy.
- Demonstrated knowledge of the technology sector, its trends, and key policy issues affecting the internet and AI.
- Proven independent project management and communication skills, especially with cross-functional partners.
- Enthusiastic, optimistic problem-solver with a track record of executing high-level analysis to drive strategy development.
Application instructions
To help us track our recruitment effort, please indicate in your cover/motivation letter that you saw this position on tendersglobal.net.