Upgrading an existing data node in a Hadoop cluster can be a difficult task. Edge nodes that do not have HDFS components and are purely for client access can be upgraded by decommissioning the node and redeploying with Ambari. Decommission the node first in Ambari. This does not remove the software, so run yum remove on all components:

This list is overkill, but it covers everything:
yum remove hcatalog\*
yum remove hive\*
yum remove hbase\*
yum remove zookeeper\*
yum remove oozie\*
yum remove pig\*
yum remove knox\*
yum remove snappy\*
yum remove hadoop-lzo\*
yum remove hadoop\*
yum remove extjs-2.2-1 mysql-connector-java-5.0.8-1\*
yum erase ambari-agent
yum erase ambari-server
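
If you would rather script this than run each command by hand, the same removals can be rolled into a short loop; this is just a sketch, and the package name patterns are taken verbatim from the commands above:

# remove all Hadoop stack packages on the edge node
for pkg in hcatalog hive hbase zookeeper oozie pig knox snappy hadoop-lzo hadoop; do
  yum -y remove "${pkg}*"
done
yum -y remove extjs-2.2-1 mysql-connector-java-5.0.8-1\*
yum -y erase ambari-agent ambari-server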

When deploying for the first time, you should have set up password-less SSH on the client node:

https://ambari.apache.org/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1-5-2.html
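
If that key was never set up (or has been lost), the usual pattern is to generate a key pair as root on the Ambari Server host and push the public key to the edge node. This is a sketch; "edge-node" below is a placeholder for the host being redeployed:

# run as root on the Ambari Server host
ssh-keygen -t rsa              # accept the defaults; creates ~/.ssh/id_rsa and id_rsa.pub
ssh-copy-id root@edge-node     # copies the public key to the edge node's authorized_keys
ssh root@edge-node hostname    # verify the login now works without a password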

Go get the same private key from the node running Ambari Server and use it to redeploy via Ambari. You should get some warnings once the connection is made; use the Python cleanup script

python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users

to clean things up in bulk, then re-run the host checks.

These users need to be removed:
userdel ambari-qa
userdel oozie
userdel hcat
userdel hive
userdel yarn
userdel hdfs
userdel nagios
userdel mapred
userdel zookeeper
userdel tez
userdel rrdcached
userdel falcon
userdel sqoop
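
The same list can be handled in one loop; the user names are taken from the commands above. Whether to also delete home directories is your call, so -r is left commented out as an assumption rather than part of the original steps:

# remove the Hadoop service accounts left behind on the edge node
for u in ambari-qa oozie hcat hive yarn hdfs nagios mapred zookeeper tez rrdcached falcon sqoop; do
  userdel "$u"        # add -r here if you also want the user's home directory removed
done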