Sign in to view Dhiviyappriya’s full profile
Irving, Texas, United States
Contact Info
38 followers
32 connections
Experience & Education
Wipro Limited
****** ******* *************
Other similar profiles
- Neha Patel (North Richland Hills, TX): Entry Level Professional
- Davuluri Dav (Montgomery, AL): Actively looking for junior/entry-level positions
- Hiral Shah (Halethorpe, MD): Actively looking for an entry-level QA job
- Mounika Rao (North Billerica, MA): Actively looking for an entry-level position
- Sangeetha Ashokkumar (Boise, ID): Actively looking for PeopleSoft/entry-level IT jobs in the Boise area; currently holding valid US work authorization
- ANJALY KUMARI (Seattle, WA): Actively looking for entry-level project coordinator roles
- Pallavi Pol (San Diego, CA): Actively looking for entry-level jobs in finance
- Deva Thobha (Porbandar): Entry-level software developer
- Tejo Rekha (Hopkins, MN): Experienced manual and automation tester
- Kaniz Syeda (Renton, WA): Actively looking for an entry-level job in data analytics, tech support, databases, or software development; available to work for any US employer without sponsorship
- Hazera Begum (Austin, TX): Actively looking for entry-level positions in transportation planning/engineering
- Uma S (San Francisco Bay Area): Pursuing a master's in data science; actively looking for entry-level data analyst jobs
- Seema Morbad (Rochester, MI): Currently looking for entry-level software engineer, big data developer, or data analyst roles
- Sai Vemali (Plano, TX): HR Associate | Recruitment, Onboarding, Benefits Administration | Skilled in HRIS, Employee Relations, Compliance | Seeking opportunities
- Pratibha Kashyap (United States): Actively seeking entry-level positions in the electronics and communication industry; graduate student at BITS Edu Campus
- Prachi Patil (Troy, MI): Seeking an entry-level opportunity
- Jyostna Thanjavur (McKinney, TX): Information Technology and Systems at The University of Texas at Dallas
- Srinath Gopinathan (Bryn Mawr, PA)
- Vivekananda Reddy (McKinney, TX): Java Developer at Wipro
- Tin Chan (Plano, TX)
Explore more posts
Mansi Sharma
Mansi Sharma

🚀 Job Opportunity 🚀
Job Title: DevOps Engineer
Location: Bedminster, NJ (onsite)
Job Type: Contract, long term
Contact: please share your updated resume at mansi@teknobit.com

Must-have skills: Red Hat OpenShift Container Platform (Operators, OpenShift Data Foundation, OpenShift Pipelines, Image Registry), Kubernetes, Docker, Podman, Elasticsearch/PostgreSQL database administration, Jenkins, GitLab, Kafka, A10 Networks load balancing.

Job Summary: We are looking for a candidate who can manage on-premise Red Hat OpenShift Container Platform infrastructure along with DevOps responsibilities. Must have 5+ years of expertise in provisioning, managing, and deploying containerized applications on Red Hat OpenShift Container Platform infrastructure.

Job Description:
- Create, configure, and maintain Red Hat OpenShift Container Platform in on-premises bare-metal and VMware environments, including setting up the underlying infrastructure, adding or removing nodes, and keeping the cluster highly available.
- Manage and administer OpenShift platform services such as Operators, OpenShift Data Foundation, Pipelines, Image Registry, OpenShift Serverless, Routes, and S3-compatible storage.
- Collaborate with the development team on containerizing applications and on addressing vulnerabilities identified in static/dynamic scans and container-registry scans.
- Set up authentication methods (e.g., LDAP), define roles and permissions using Role-Based Access Control (RBAC), and create and manage user accounts.
- Configure and manage networking within the cluster: service definitions, network policies, and communication between applications both inside and outside the cluster.
- Configure and manage OpenShift Container Storage (OCS)/OpenShift Data Foundation (ODF) for stateful workloads and data services, including persistent storage, storage classes, and the lifecycle of storage volumes.
- Manage and administer on-premise Elasticsearch clusters: deployment, configuration, index management, cross-cluster replication, snapshot/restore, agent management, cluster upgrades, and index templates. Secure access to the cluster, its data, and its APIs according to the required access-control policies, and troubleshoot cluster issues.
- Continuously evaluate the platform for security hardening and observability improvements.
- Evaluate new technologies and industry trends, develop proofs of concept, and present findings.
- Infrastructure-as-Code development and deployments.

Certifications (any of the following are a big plus): Certified Kubernetes Administrator (CKA), Red Hat Certified OpenShift Administrator.

#DevOpsJobs #Hiring #OpenShift #Kubernetes #BedminsterJobs #TechJobs #W2Opportunity
Shyam M
Spark developer with production support and CI/CD skills (very important). Charlotte, NC, hybrid. Must: shyam@hanstaffing.com. Production support, Spark, Helm, and Prometheus only; no development experience required, as this is the admin/ops side of Spark rather than the usage side. Topics: Spark cluster management, issues faced in production, Spark incident management, RCA, issue monitoring, and the tools used to monitor issues.
Akansha Agrawal
#JobOpportunity 💼🚀
Role: Senior Backend Developer
Location: Cincinnati, OH
Duration: 6 months
Contact: please share your updated resume at akansha@teknobit.com

Skills/Experience: Microservices (must-have), experience with high-volume applications, and both Go and Java experience. Note: candidates must be able to collaborate with offshore teams.

Project: Backend development on a customer platform/profile; will also provide production support.

Interview: first round, a 30-minute screening with the tech lead/hiring manager; second round, a panel interview with technical screening. Pre-screen: 5 questions, a game, and a coding challenge.

Highly preferred: candidates who have worked at enterprise-level companies, preferably with retail or e-commerce domain experience.

Job Description: The Developer leads the design, development, testing, debugging, maintenance, and documentation of assigned software components; participates in the technical design process; completes estimates and work plans for design, development, implementation, and rollout tasks; communicates with the appropriate teams to ensure assignments are delivered with high quality and according to standards; and strives to continuously improve software delivery processes and practices.

Specific skill sets:
- Senior backend services developer
- Experience with high-throughput, distributed, multi-region applications and databases
- Backend REST microservices development experience
- Deploying and maintaining microservices on Kubernetes
- Expertise in Go and/or Java (preferably Go)
- Cloud expertise (preferably Azure)
- Experience with distributed SQL and NoSQL databases
- Expertise with CI/CD workflows (preferably GitHub)
- Ability to work with remote offshore teams

Key responsibilities:
- Lead and participate in the design and implementation of large and/or architecturally significant applications.
- Champion company standards and best practices; work to continuously improve software delivery processes and practices.
- Build partnerships across the application, business, and infrastructure teams.
- Develop programming specifications; design, code, and unit test application code using Software Development Life Cycle (SDLC) best practices.
- Complete estimates and work plans independently for design, development, implementation, and rollout tasks.
- Create technical system documentation and keep it current throughout all phases of the SDLC.
- Participate in all phases of system testing.
- Communicate with the appropriate teams to ensure assignments are managed appropriately and completed work is of the highest quality.

#Backend #Developer #OH #hybrid #hiringalert #gethired #jobopening #jobfair #recruiting #hiring #joinourteam #opentowork #jobsearch #hireme #jobhunt #jobseeker
G Rajini
Hi, greetings from Kaizen Soft Solutions LLC.
Role: AWS Infra + DevOps Engineer
Location: Raleigh, NC (day 1 onsite)
Employment: Contract
Key skills (must have): AWS ECS, EKS, ECR, AWS WAF, Argo CD, GitHub, Jenkins, AWS HA proxy.

Requirements:
- 7-10 years of overall experience, including managing and configuring AWS cloud implementation projects.
- Terraform experience is a must for building the infrastructure.
- Hands-on experience with AWS ECS, EKS, and ECR is required.
- AWS FIPS endpoints experience/knowledge with hands-on implementation.
- Build, configure, and manage cloud compute and data storage infrastructure for multiple instances of the AWS platform.
- Build and deploy VPCs, security groups, and user access to our various public cloud systems and services through Terraform.
- Develop processes and procedures for using cloud-based infrastructure, including access-key rotation, disaster recovery, and building new services.
- Develop scripts and workflows to manage cloud computing systems.
- Hands-on experience running cloud infrastructure in public clouds such as AWS.
- Strong organizational skills: you will manage costs and system access to AWS deployments.
- Cloud database management.
- Network expertise, including routing, IP addresses and subnets, firewalls, load balancers, IPsec VPN, etc.
- DevOps scripting (YAML, Terraform, and Python).
- Experience with CI/CD tools (CodeBuild, CodePipeline) and source repositories (CodeCommit).
- Landing Zones, CloudFront, firewalls, routes, VPNs, NACLs, IGW, NAT, Direct Connect, VPC peering, etc.
- Hands-on experience with centralized logging and monitoring solutions for CloudTrail, Config, GuardDuty, Security Hub, Landing Zone, Lambda, ECS and EKS, WAF, Migration Hub, and SIEM.
- Mandatory experience with a self-hosted Jenkins deployment server for CI/CD: building Jenkins pipelines to deploy infrastructure, EKS clusters, databases, and EC2 instances.
- Argo CD: build, automate, install, and configure Argo CD with an EKS cluster (continuous deployment) on AWS in each environment to scale/deploy application code on the EKS cluster via Argo CD deployments, integrated with Jenkins; use GitHub/GitLab for source-code versioning.
- Hands-on implementation of AWS WAF, with automation through Terraform and enablement on the load balancer.
- Hands-on implementation of AWS HA proxy, with automation through Terraform and use on the load balancer.
- Experience integrating SIEM tools with monitoring for the required AWS services, gathering security information and event management data to keep the platform secure.

Regards,
Rajini
rajini@kzsoftsolutions.com
(813) 568-1275
Shyam Gajjelli
571-639-3020 || shyam@sapphiresoftwaresolutions.com
Hiring: Database Administrator with PostgreSQL || Hybrid (Fort Worth, TX)
Role: Database Administrator with PostgreSQL
Location: Fort Worth, TX
Duration: 1 year

A PostgreSQL Database Administrator (DBA) is responsible for the installation, configuration, maintenance, and overall management of the PostgreSQL database system. The role's responsibilities fall into several key areas:

Installation and Configuration:
- Install PostgreSQL on various platforms.
- Configure PostgreSQL to meet specific performance and security requirements.
- Set up and manage configuration files such as `postgresql.conf` and `pg_hba.conf`.

Database Design and Development:
- Design and create databases and schemas.
- Create and manage database objects such as tables, indexes, views, and sequences.
- Implement data integrity and normalization.

User Management and Security:
- Create and manage database users and roles.
- Define and enforce database security policies.
- Manage access controls using GRANT and REVOKE statements.
- Implement SSL/TLS for secure connections.

Backup and Recovery:
- Develop and implement backup strategies.
- Perform regular database backups.
- Test and execute recovery procedures.
- Use tools like `pg_dump`, `pg_restore`, and `pg_basebackup`.

Performance Tuning and Optimization:
- Monitor database performance and resource utilization.
- Identify and resolve performance bottlenecks.
- Optimize queries and database structures.
- Use tools like `EXPLAIN`, `VACUUM`, and `ANALYZE`.

Maintenance and Upgrades:
- Perform routine maintenance tasks such as vacuuming, analyzing, and reindexing.
- Plan and execute database upgrades and patching.
- Monitor and manage database replication and failover strategies.

Monitoring and Troubleshooting:
- Monitor database health and respond to alerts.
- Troubleshoot database issues and errors.
- Use logging and monitoring tools such as `pg_stat_activity`, `pg_stat_replication`, and third-party monitoring solutions.

Data Migration and Replication:
- Plan and execute data migrations between PostgreSQL instances or from other database systems.
- Set up and manage replication (e.g., streaming replication, logical replication).
- Ensure data consistency and availability.

Compliance and Documentation:
- Ensure compliance with relevant data-protection regulations.
- Maintain thorough documentation of database environments, procedures, and configurations.
- Keep track of database changes and maintain version control.

A PostgreSQL DBA needs a solid understanding of SQL, database design principles, and system administration skills, as well as familiarity with the specific features and tools provided by PostgreSQL.

Naresh Kumar Bhimanatini
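As a sketch of the backup workflow described above, a `pg_dump` invocation can be assembled from a small script. The database name, connection settings, and output directory below are hypothetical placeholders, and the custom archive format is chosen because it is the one `pg_restore` consumes.

```python
import datetime
import shlex

def pg_dump_command(dbname, host="localhost", port=5432,
                    user="postgres", out_dir="/var/backups/pg"):
    """Build a pg_dump command line that writes a custom-format archive,
    suitable for selective restore with pg_restore."""
    stamp = datetime.date.today().isoformat()
    outfile = f"{out_dir}/{dbname}-{stamp}.dump"
    return [
        "pg_dump",
        "--format=custom",   # compressed archive, restorable with pg_restore
        f"--host={host}",
        f"--port={port}",
        f"--username={user}",
        f"--file={outfile}",
        dbname,              # database to dump goes last
    ]

# Inspect the command without running it (no live server needed):
print(shlex.join(pg_dump_command("inventory")))
```

In practice the resulting list would be passed to `subprocess.run`, with the schedule and retention handled by the backup strategy the posting mentions.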
Srinivas rao
Job Title: Cloud Architect with SRE Experience
Location: Memphis, TN (hybrid)
Duration: 6-month contract

Job Description: The ideal candidate will possess comprehensive expertise in cloud architecture and site reliability engineering: a solid foundation in designing scalable, resilient cloud solutions and hands-on experience migrating and modernizing applications to enhance operational efficiency and cloud agility.

Cloud Architecture Design:
- Develop and implement cloud architecture solutions that align with business objectives and support scalability, availability, and performance requirements.
- Design resilient, fault-tolerant systems that leverage cloud-native technologies and best practices.

Site Reliability Engineering (SRE):
- Implement SRE principles and practices to ensure the reliability, availability, and performance of cloud-based services.
- Define Service Level Objectives (SLOs), Service Level Indicators (SLIs), and error budgets to measure and manage service reliability.

Cloud Migration and Modernization:
- Lead the migration of legacy applications to cloud platforms, ensuring seamless integration and minimal disruption to operations.
- Modernize existing applications by refactoring, containerizing, or adopting serverless architectures for improved scalability and efficiency.

Infrastructure as Code (IaC):
- Use IaC tools such as Terraform or CloudFormation to automate the provisioning and management of cloud infrastructure.
- Implement configuration management and version-control practices to ensure consistency and traceability of infrastructure changes.

Continuous Integration/Continuous Deployment (CI/CD):
- Establish CI/CD pipelines for automated build, test, and deployment, enabling rapid and reliable software delivery.
- Integrate monitoring, logging, and alerting into CI/CD pipelines to detect and respond to issues proactively.

Performance Optimization:
- Identify performance bottlenecks and optimize cloud resources to improve application performance and reduce costs.
- Implement caching strategies, load balancing, and auto-scaling to optimize resource utilization and enhance user experience.

Security and Compliance:
- Implement security best practices and compliance controls to protect cloud-based systems and data.
- Conduct regular security assessments and audits to identify and mitigate potential risks and vulnerabilities.

Email: srinivas@kynite.io
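The SLO/error-budget arithmetic mentioned in the posting above is simple to illustrate. The 99.9% availability target below is an example figure, not one taken from the job description:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime within the window for a given
    availability SLO, e.g. slo=0.999 for 99.9% availability."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))
```

An SRE team would burn down this budget with each incident; when the budget is exhausted, reliability work takes priority over feature releases.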
Anurag K.
Please share your resume at anurag.kumar@tanishasystems.com.
Tanisha Systems, Inc
Role: Azure Security Architect
Location: NYC, NY (onsite)
Hire Type: Both FTE and contract

Who are we looking for? The Azure Security Architect is responsible for ensuring that the design and security of Azure IaaS/PaaS/SaaS, hybrid infrastructure, and applications meet the company's legal and regulatory security and compliance standards.

Position Summary:
- Hands-on working knowledge of the Azure-native technology stack: Azure AD, Defender for Cloud (CSPM) and Identity, PIM, Conditional Access, Defender for Identity, AIP, Azure Log Analytics, Azure Monitoring, and Azure Key Vault.
- Hands-on working knowledge of the Microsoft 365 security suite: Defender for Endpoint, Information Protection, Purview, DLP, EOP, Intune (MDM/MAM), and Defender for Cloud Apps.
- Experience designing, implementing, and delivering security for cloud-native, distributed computing and architectural solutions on a "secure by design" principle.
- Identify and deliver appropriate controls based on industry standards to drive an Azure cloud and customer security solutions framework grounded in business risk and cloud-native threats.
- Design, develop, and implement security architectures for Azure cloud and cloud/hybrid systems with minimal risk to the organization.
- Collaborate with business and IT operations personnel to implement group-level security plans, policies, and procedures.
- Expertise in vulnerability management, threat modeling, and generating security architectural requirements for SDLC and product teams.
- Experience with the assessment, development, implementation, optimization, and documentation of a comprehensive, broad set of security technologies and processes (SDLC, application security, data protection, cryptography, key management, identity and access management (IAM), network security) within SaaS, IaaS, PaaS, and other Azure cloud environments.
- Hands-on working knowledge of Azure-native/cloud-friendly authentication mechanisms (OAuth, OpenID, etc.) and experience with IAM, MFA, PIM/PAM, and SSO for cloud and custom IdPs.
- Experience defining security zones in an Azure environment, Azure Landing Zone, and hub-and-spoke architecture; creating firewall rules for the DMZ; knowledge of infrastructure management and the application and data domains.
- Experience with backup and data-loss strategy, creating and automating an incident response plan, monitoring for security events, and application security testing.
- High-level knowledge of IDS/IPS, SIEM, and SOAR.
- Knowledge and understanding of global security frameworks and implementation of security and governance best practices (ISO, NIST, CIS, OWASP, GDPR, ITIL).
- Architecting high-level solutions and documentation (BRD, HLD, LLD, and SCR).
- Strong stakeholder management.

#c2c #w2 #fulltime #permanent #directhire #hiring #job
Ramya Sri
Share resumes to c.ramya@canopyone.com
Note: candidates must be local to New Jersey and able to attend an in-person interview.
Position: AWS Engineer
Location: Hamilton, NJ (hybrid)

Responsibilities:
- Background and experience providing DevOps support to cloud-deployed applications.
- Strong experience in production support for applications in production or about to be deployed to production.
- Deep understanding of and familiarity with Linux and web/application servers: Apache, Nginx, Tomcat, etc.
- Deployment, automation, management, and maintenance of AWS cloud-based production systems.
- Monitoring and logging systems: CloudWatch, CloudTrail, Elasticsearch.
- Networking knowledge: firewalls, VPNs, proxies, and load balancers.
- Ensure availability, performance, security, and scalability of AWS production systems.
- Experience with Python, shell, or another scripting language.
- Manage and maintain hardware, software, security, connectivity to the Internet, and middleware components.
- Good understanding of user management for both Windows and Linux machines; integration with on-prem AD groups and SSO would be good to have.
- Software development fundamentals, problem solving, documentation skills, verbal communication, application maintenance, application and system security, promotion of team building and process improvement, system administration.
- Troubleshoot and resolve system service failures by identifying and analyzing the situation and providing corrective actions.
- Monitor systems activity and fine-tune system parameters and configuration to optimize performance and ensure security of systems.
- Monitor daily systems, evaluate the availability of all server resources, and carry out all Linux server tasks.
- Provide critical system security by leveraging best practices and prolific cloud security solutions.
- Fault finding, analysis, and logging of information for reporting performance exceptions.
- Proactively monitor system performance and plan capacity.
- Integrate automated testing frameworks into CI/CD pipelines to ensure code quality and reliability.
- Implement blue-green deployments, canary releases, and other deployment strategies to minimize downtime and risk.
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers, workstations, and network hardware.
- Create and modify scripts or applications to perform tasks.
- Thorough understanding of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Create and maintain documentation of system configurations and procedures.
- Define and deploy systems for metrics, logging, and monitoring on the AWS platform.

#AWSEngineer #devops #aws #python #shellscripting #cicd #newjerseyjobs #opentowork
Ankita Tripathi
#JobOpportunity 💼🚀
Role: Senior Backend Developer
Location: Cincinnati, OH
Duration: 6 months
Contact: please share your updated resume at ankita@teknobit.com

Skills/Experience: Microservices (must-have), experience with high-volume applications, and both Go and Java experience. Note: candidates must be able to collaborate with offshore teams.

Project: Backend development on a customer platform/profile; will also provide production support.

Interview: first round, a 30-minute screening with the tech lead/hiring manager; second round, a panel interview with technical screening. Pre-screen: 5 questions, a game, and a coding challenge.

Highly preferred: candidates who have worked at enterprise-level companies, preferably with retail or e-commerce domain experience.

Job Description: The Developer leads the design, development, testing, debugging, maintenance, and documentation of assigned software components; participates in the technical design process; completes estimates and work plans for design, development, implementation, and rollout tasks; communicates with the appropriate teams to ensure assignments are delivered with high quality and according to standards; and strives to continuously improve software delivery processes and practices.

Specific skill sets:
- Senior backend services developer
- Experience with high-throughput, distributed, multi-region applications and databases
- Backend REST microservices development experience
- Deploying and maintaining microservices on Kubernetes
- Expertise in Go and/or Java (preferably Go)
- Cloud expertise (preferably Azure)
- Experience with distributed SQL and NoSQL databases
- Expertise with CI/CD workflows (preferably GitHub)
- Ability to work with remote offshore teams

Key responsibilities:
- Lead and participate in the design and implementation of large and/or architecturally significant applications.
- Champion company standards and best practices; work to continuously improve software delivery processes and practices.
- Build partnerships across the application, business, and infrastructure teams.
- Develop programming specifications; design, code, and unit test application code using Software Development Life Cycle (SDLC) best practices.
- Complete estimates and work plans independently for design, development, implementation, and rollout tasks.
- Create technical system documentation and keep it current throughout all phases of the SDLC.
- Participate in all phases of system testing.
- Communicate with the appropriate teams to ensure assignments are managed appropriately and completed work is of the highest quality.

#Backend #Developer #OH #hybrid #hiringalert #gethired #jobopening #jobfair #recruiting #hiring #joinourteam #opentowork #jobsearch #hireme #jobhunt #jobseeker
Aashish Aash
#HiringAlert #Hiring for a #ContractRole. Please share your resume at aashish@yochana.com if you are interested.
Role: Cloud Architect with SRE Experience
Location: Memphis, TN (hybrid)

Job Description: The ideal candidate will possess comprehensive expertise in cloud architecture and site reliability engineering: a solid foundation in designing scalable, resilient cloud solutions and hands-on experience migrating and modernizing applications to enhance operational efficiency and cloud agility.

Cloud Architecture Design:
- Develop and implement cloud architecture solutions that align with business objectives and support scalability, availability, and performance requirements.
- Design resilient, fault-tolerant systems that leverage cloud-native technologies and best practices.

Site Reliability Engineering (SRE):
- Implement SRE principles and practices to ensure the reliability, availability, and performance of cloud-based services.
- Define Service Level Objectives (SLOs), Service Level Indicators (SLIs), and error budgets to measure and manage service reliability.

Cloud Migration and Modernization:
- Lead the migration of legacy applications to cloud platforms, ensuring seamless integration and minimal disruption to operations.
- Modernize existing applications by refactoring, containerizing, or adopting serverless architectures for improved scalability and efficiency.

Infrastructure as Code (IaC):
- Use IaC tools such as Terraform or CloudFormation to automate the provisioning and management of cloud infrastructure.
- Implement configuration management and version-control practices to ensure consistency and traceability of infrastructure changes.

Continuous Integration/Continuous Deployment (CI/CD):
- Establish CI/CD pipelines for automated build, test, and deployment, enabling rapid and reliable software delivery.
- Integrate monitoring, logging, and alerting into CI/CD pipelines to detect and respond to issues proactively.

Performance Optimization:
- Identify performance bottlenecks and optimize cloud resources to improve application performance and reduce costs.
- Implement caching strategies, load balancing, and auto-scaling to optimize resource utilization and enhance user experience.

Security and Compliance:
- Implement security best practices and compliance controls to protect cloud-based systems and data.
- Conduct regular security assessments and audits to identify and mitigate potential risks and vulnerabilities.
Neeraj Rathi
Job Title: DevOps Engineer
Location: San Francisco, CA

Job Description:
- AWS certification.
- Experience using Terraform to manage AWS programmable infrastructure.
- Must have architected and implemented cloud infrastructure automation scripts to create and maintain target environments (Dev, Stage, QA, Integration, and Production) in AWS.
- Experience with advanced Terraform features such as S3 backends and state file locks.
- Experience developing an end-to-end AWS-native platform for building data lakes: S3, Glue (Crawlers, ETL, Catalog), IAM, CodePipeline, CodeCommit, CloudTrail, CloudWatch, AWS Config, GuardDuty, Secrets Manager, KMS, EC2, a data visualization tool such as Tableau running on EC2 or AWS QuickSight, and Athena.

#devops #cloud #aws #programming #cloudcomputing #technology #developer #linux #python #coding #azure #software #iot #cybersecurity #kubernetes #it #css #javascript #java #devopsengineer #tech #ai #datascience #docker #softwaredeveloper #webdev #machinelearning #programmer #bigdata #security #automation #Terraform #Athena #ec2 #IAM #cloud
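The "S3 backends and state file locks" requirement above refers to Terraform remote state. A minimal sketch of such a backend block follows; the bucket and DynamoDB table names are placeholders, and the DynamoDB table is what provides the state locking:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder state bucket
    key            = "envs/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"   # table used for state locking
    encrypt        = true                        # server-side encryption of state
  }
}
```

With this in place, concurrent `terraform apply` runs against the same state acquire a lock in the DynamoDB table, preventing two operators from mutating the environment at once.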
Md Rahmat Ali
Position: Sr. AWS Infra + DevOps Engineer
Location: Raleigh, North Carolina
Job Type: Contract
Share resume at mrahmat@greattechglobal.com (SR Band: B3)

AWS implementation:
- 7-10 years of overall experience, including managing and configuring cloud implementation projects.
- Terraform experience is a must for building the AWS infrastructure.
- Hands-on experience with AWS ECS, EKS, and ECR is required.
- Experience/knowledge of AWS FIPS endpoints, with hands-on implementation.
- Build, configure, and manage cloud compute and data storage infrastructure for multiple instances of the AWS platform.
- Build and deploy VPCs, security groups, and user access to our various public cloud systems and services through Terraform.
- Develop processes and procedures for using cloud-based infrastructure, including access-key rotation, disaster recovery, and building new services.
- Develop scripts and workflows to manage cloud computing systems.
- Hands-on experience running cloud infrastructure in public clouds like AWS.
- Strong organizational skills, as you will manage costs and system access to AWS deployments.
- Cloud DB management.
- Network expertise, including routing, IP addresses and subnets, firewalls, load balancers, IPsec VPN, etc.
- DevOps scripting (YAML, Terraform, and Python).
- Worked with CI/CD tools (CodeBuild, CodePipeline) and source repositories (CodeCommit).
- Networking services (Landing Zones, CloudFront, firewalls, routes, VPNs, NACLs, IGW, NAT, Direct Connect, VPC Peering, etc.).
- Hands-on experience with centralized logging and monitoring solutions for CloudTrail, Config, GuardDuty, Security Hub, Landing Zone, Lambda, ECS and EKS, WAF, Migration Hub, and SIEM.

Mandatory experience:
- Self-hosted Jenkins deployment server for CI/CD: build Jenkins pipelines to deploy infrastructure, EKS clusters, databases, and EC2 instances.
- Argo CD: build, automate, install, and configure Argo CD with an EKS cluster (continuous deployment) on AWS in each environment to scale/deploy application code on the EKS cluster, and integrate it with Jenkins.
- Use GitHub/GitLab for source-code versioning.
- Hands-on implementation of AWS WAF, automated through Terraform and enabled on the load balancer.
- Hands-on implementation of HAProxy on AWS, automated through Terraform and used with the load balancer.
- Experience with SIEM tools, integrated with monitoring for the required AWS services, to gather security information and handle event-management tasks, ensuring the platform is secure.
- Responsible for managing and upgrading DevOps toolsets.
- Experience creating Jenkins CI/CD pipelines from scratch using automation scripts.
- Hands-on experience installing, integrating, and managing a complete CI/CD pipeline that can build and deploy to the cloud.
- Experience working with a team to understand requirements and build CI/CD and Argo CD pipelines.
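The Jenkins-plus-Argo CD setup this posting describes can be sketched as a minimal declarative Jenkinsfile: Jenkins provisions infrastructure with Terraform, then triggers Argo CD to reconcile the application onto the EKS cluster. All names here (the app name, the stage layout) are hypothetical, and it assumes the `argocd` CLI is installed and already logged in on the agent.

```groovy
// Hypothetical Jenkinsfile sketch: infrastructure via Terraform,
// application delivery handed off to Argo CD (GitOps).
pipeline {
    agent any
    stages {
        stage('Terraform Plan') {
            steps { sh 'terraform init -input=false && terraform plan -out=tfplan' }
        }
        stage('Terraform Apply') {
            steps { sh 'terraform apply -input=false tfplan' }
        }
        stage('Argo CD Sync') {
            // "example-app" is a placeholder Argo CD application name
            steps { sh 'argocd app sync example-app --prune' }
        }
    }
}
```

The split matters: Jenkins owns the imperative infrastructure steps, while Argo CD continuously reconciles application manifests from Git, so application rollbacks become Git reverts rather than pipeline reruns.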
sundeep s
Warm greetings! This is Sundeep from TOPSYSIT. We are currently seeking an AWS Cloud API Developer; this is an onsite role.
Location: Charlotte, NC & Austin, TX
Experience: 10+ years
Work authorization: any visa is fine on W2/C2C/1099. #GC or #GCEAD candidates who can provide proper documentation are eligible for this role. (No OPT or CPT.)

Responsibilities:
- Develop, deploy, and maintain cloud-based APIs using AWS services.
- Design and implement RESTful APIs using Lambda functions, Python, and API Gateway.
- Collaborate with cross-functional teams to gather requirements and define API specifications.
- Ensure high availability, scalability, and security of APIs in the cloud environment.
- Implement authentication, authorization, and tokenization mechanisms for API security, particularly on the API Gateway side.
- Optimize API performance and latency by leveraging AWS infrastructure and services.
- Integrate APIs with various backend systems and third-party services.
- Enable API-to-API and API-to-application communication.
- Develop and maintain API documentation, and provide technical support to internal and external stakeholders.
- Implement alerting and monitoring systems for APIs; monitor, troubleshoot, and resolve issues related to API functionality and performance.
- Stay current with AWS best practices and emerging technologies in cloud development.
- Demonstrate a strong understanding of AWS big data architecture and its application in cloud development projects.
- Use Step Functions and Terraform for infrastructure and API development.
- Develop and maintain RESTful APIs with a focus on automated unit testing and code quality; write unit test cases and follow test-driven development (TDD) practices, using Pytest and PyCharm as the unit-testing toolchain.
- Collaborate with cross-functional teams to explain project requirements and handle difficult situations effectively.
- Integrate stored procedures (AWS Postgres) with APIs.
- Handle large datasets efficiently and effectively in API development.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a software developer with expertise in cloud-based API development.
- Strong knowledge of AWS services such as Lambda, API Gateway, DynamoDB, S3, and CloudWatch.
- Proficiency in programming languages such as Python, Node.js, or Java.
- Experience with RESTful API design principles and best practices.
- Familiarity with authentication protocols like OAuth and JWT, and with API key management.
- Ability to work in an Agile development environment and collaborate effectively with teams.
- Excellent problem-solving and analytical skills with attention to detail.
- Strong communication skills and the ability to articulate technical concepts to non-technical stakeholders.
- AWS certification (e.g., AWS Certified Developer) is a plus.

Thanks, Sundeep
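The Lambda-plus-API-Gateway-plus-Pytest stack this role centers on can be sketched in a few lines. This is a minimal illustration, not the client's code: the route, event shape (API Gateway REST proxy integration), and response format are assumed, and the test function shows the TDD style the posting asks for.

```python
import json


def lambda_handler(event, context):
    """Minimal API Gateway (proxy integration) handler sketch.

    Assumes a hypothetical GET route with an optional ?name= query
    parameter; the event follows the REST proxy-integration format.
    """
    params = (event or {}).get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# A pytest-style unit test, per the TDD practices the posting requires:
def test_lambda_handler_defaults():
    resp = lambda_handler({"queryStringParameters": None}, None)
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["message"] == "hello, world"
```

Keeping the handler a pure function of the event makes it trivially testable with Pytest locally, before any deployment through Terraform or CodePipeline.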
Amith Kumar
Hi all, hot requirement: Java Backend with AWS. Remote, NJ (need locals only). GC & USC only. 12+ years. Solid Java backend with AWS and rule-engine setup. Please share profiles to amith.kumar@3sbc.com Thanks, Amith Kumar #javabackend #javarequirements #javac2c #javaaws #javagc #javausc #javaremote #javaruleenginesetup
Yashika Rai
#Local to #Atlanta
Job Title: Sr. Full Stack Developer
Location: Atlanta, Georgia
Must have: Java, Angular (preferred) or React, AWS
We need: US citizens, green card holders, EAD (except OPT). This is a CTH role.

What you need to succeed (minimum qualifications):
• Experience in cloud-native development, RESTful APIs, and stateless microservices architectures.
• 3-5 years of experience with Java 11/17/21, J2EE, and the Spring Boot framework, and strong working knowledge of developing and deploying applications in major cloud providers (AWS or Azure); Swagger design using OpenAPI in YAML/JSON.
• Knowledge of the concepts, values, and tools used in building Continuous Integration (CI) and Continuous Delivery/Deployment (CD) pipelines; build, implement, and maintain CI/CD pipelines to automate the software delivery process. Tekton Pipelines or GitLab CI preferred.
• Working knowledge of AWS CDK (TypeScript preferred), CloudFormation templates (CFT), and Infrastructure as Code (IaC); debugging infrastructure issues in CDK and CFT.
• Experience working with containers and managed container orchestrators (Kubernetes/OpenShift) in the cloud, for example Red Hat OpenShift on AWS (ROSA) or AWS Elastic Kubernetes Service (EKS).
• Experience fixing security issues (e.g., Veracode findings) in applications and other compliance issues in infrastructure code (IaC).
• Shell scripting knowledge in Linux; Python.
• Experience with core AWS services such as API Gateway, Lambda, S3, SQS, Step Functions, SNS, EC2, CodePipeline, Athena, DynamoDB, and RDS; strong understanding of core AWS services and of best practices for security and scalability.
• Experience implementing code coverage and code quality, with observability, monitoring, logging, and tracing as part of development, leveraging managed services like AWS CloudWatch, Sumo Logic, and Dynatrace.
• Knowledge of streaming using Kafka; relational and NoSQL databases like DynamoDB and MongoDB; and web-service development standards and practices, including RESTful APIs, microservices, and SOA services.
• Experience with test-driven development; exposure to behavior-driven development.
• Knowledge of Agile methodologies and experience working in an agile development environment using workload management tools like VersionOne/Agility.
• Passion for driving continuous improvement.
• Proactive and able to pick up new technologies quickly.
• Strong technical and non-technical communication skills; ensures smooth, timely transmission of critical information.
• Excellent judgment and problem-solving skills; able to resolve problems calmly and quickly, with a high degree of initiative and drive.
• Embraces diverse people, thinking, and styles.

What will give you a competitive edge:
• AWS certification is a plus.
• Airline industry experience is a plus.

Email: yashikairecruiter@gmail.com
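The "Swagger design using OpenAPI in YAML" skill above can be illustrated with a minimal OpenAPI 3.0 document. Everything here is hypothetical (the API title, path, and schema are invented for illustration); it just shows the shape of a spec-first REST endpoint definition.

```yaml
# Hypothetical OpenAPI 3.0 sketch for a single GET endpoint.
openapi: "3.0.3"
info:
  title: Example Flight Status API   # illustrative name, not the client's API
  version: "1.0.0"
paths:
  /flights/{flightId}:
    get:
      summary: Get status for a single flight
      parameters:
        - name: flightId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current flight status
          content:
            application/json:
              schema:
                type: object
                properties:
                  flightId: { type: string }
                  status: { type: string }
```

A spec like this can drive Swagger UI for interactive testing, generate client/server stubs, and serve as the contract between the Angular front end and the Spring Boot services the role describes.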
Atul Sharma
Need a DevOps Engineer! Davie, FL. Experience with Kubernetes, GitOps, mobile apps, and Microsoft Azure is a must. Must be local to FL with a local ID. Please share resumes to atul.sharma@3bstaffing.com #devopsengineer #devops #linux #aws #cloudcomputing #devopstraining #softwaredeveloper #python #devopstools #softwareengineer #programming #devopscommunity #kubernetes #docker #developer #coding #devopslife #awscloud #cloud #software #jenkins #azure #github #technology #cloudengineer #ansible #java #computerscience #devopsjobs #datascience
Pranay M
Job Title: Java Angular Developer
Location: TX, FL, NJ, DE, OH, GA, NE (locals only)
Visa: H4 EAD, H1B, GC, GC EAD
Job Description:
- Have worked on Angular 7 and above, developing components and consuming REST APIs.
- Have developed REST services/APIs using Spring Boot with any relational or document database as the backend.
- Have worked on web application development using Java 8+ and Spring Boot.
- Have knowledge of consuming SOAP services using the Spring framework.
- Good to have: implemented Swagger UI in REST services for interactive UI testing.
- Have thorough knowledge of unit testing Angular using Jasmine and Karma.
- Have good knowledge of unit testing REST APIs.
- Have a proficient understanding of HTML5, CSS3, JavaScript, and TypeScript.
Thank you, Pranay
#onsite #hybrid #remote #jobs #fullstack #frontend #hiring #c2c #w2 #contract
Ranga N
Hi Sales Recruiters, hope you're doing well. Please find the urgent req below and send me suitable profiles. Please share only local candidates who can go onsite and work from day one. ranga@torquetek.com

Job: Kubernetes Engineer
Location: Remote
Duration: Long term
Experience: 5 yrs Kubernetes; 5 yrs Golang and Argo; 4 yrs Flux

Required Qualifications:
· Bachelor's or master's degree in Computer Science, Engineering, or a related field; 7-10 years of experience building platforms.
· Strong background in Kubernetes architecture, deployment, and management; deep understanding of Kubernetes, including security, the object model, and architecture.
· Strong understanding of K8s ecosystem tooling: Argo CD, Flux, ingress controllers, service mesh.
· Expertise building CI/CD pipelines for Kubernetes environments using tools such as Jenkins, GitLab CI, or GitHub.
· Proficiency with Golang.
· Strong understanding of the SDLC and of how developers develop, release, and operate systems; strong opinions on how software development should be done.
· Proficiency in GitOps and the software development life cycle in cloud-native and Kubernetes environments.
· Excellent problem-solving and teamwork skills.
· Strong communication and documentation abilities.

Preferred Qualifications:
· Experience with Terraform.
· Experience with OAM and KubeVela.
· Experience writing controllers via Kubebuilder or Crossplane.
· Experience with GitLab.
· Experience implementing and managing service mesh solutions like Istio or Linkerd to improve microservice communication and reliability within Kubernetes.
· Experience writing tests using the Ginkgo/Gomega frameworks.

Responsibilities:
· Design and implement a platform specification file based on the OAM format that conforms to both developer and platform infrastructure requirements.
· Leverage your experience with K8s and infrastructure to ensure that deployments are secure, scalable, and conform to best practices.
· Collaborate with the team that manages the underlying infrastructure to ensure they provide the capabilities required for you to build and manage your platform.
· Design a developer experience around the specification to make it as easy to use as possible; build integrations using code or scripting to deliver this experience, considering workload deployment and day-2 operations.
· Create a set of recommendations for how development teams should consume the specification file to reduce the risk of outages.
· Build out an SDLC for the specification to support fast code development and the ability to update the specification file without causing downtime or impacting customers; design and implement release, development, and testing processes.
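The OAM-format platform specification described above can be sketched as a minimal KubeVela-style Application. This is an illustrative fragment, not the employer's spec: the app name, image, and port are placeholders, though the `apiVersion`, `kind`, and component/trait layout follow the OAM model KubeVela implements.

```yaml
# Hypothetical OAM Application sketch (KubeVela-style).
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app            # placeholder application name
spec:
  components:
    - name: web
      type: webservice         # built-in KubeVela component type
      properties:
        image: registry.example.com/web:1.0.0   # placeholder image
        port: 8080
      traits:
        - type: scaler         # operational trait layered onto the component
          properties:
            replicas: 3
```

The appeal of this split for a platform team is that developers declare components (what runs), while platform-owned traits (scaling, ingress, policies) can evolve underneath without changing the developer-facing specification.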