SSH Tunnel to access EC2 Hadoop Cluster


Background:

  1. I have installed a 3-node Cloudera Hadoop cluster on EC2 instances, and it is working as expected.

  2. A client program on a Windows machine loads data from that machine into HDFS.

Details:

My client program is developed in Java; it reads data from the Windows local disk and writes it to HDFS.
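For reference, a minimal version of such a client, assuming the standard Hadoop FileSystem API, could look like the sketch below; the NameNode URI, user, and file paths are only placeholders, not the real values from my setup:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsLoader {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connect to the NameNode (here reached through the tunnel on localhost) as the "ubuntu" user.
            FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf, "ubuntu");
            // Copy a file from the Windows local disk into HDFS.
            fs.copyFromLocalFile(new Path("C:/data/features.json"),
                                 new Path("/user/ubuntu/features.json"));
            fs.close();
        }
    }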

To create the SSH tunnel I am using PuTTY. Logging in to the remote EC2 instance with my Windows username is not working, but I am able to log in with the Unix username. I wanted to understand whether this is the correct behavior?
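As an aside, the PuTTY tunnel is just an SSH local port forward; a rough equivalent can be set up from Java with the JSch library. This is only a sketch under that assumption; the key path, host names, and ports are placeholders:

    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class NameNodeTunnel {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            // JSch expects an OpenSSH/PEM private key, not PuTTY's .ppk format.
            jsch.addIdentity("C:/keys/my-ec2-key.pem");

            // Log in with the Unix account that exists on the instance (e.g. "ubuntu"), not the Windows username.
            Session session = jsch.getSession("ubuntu", "ec2-xx-xx-xx-xx.compute-1.amazonaws.com", 22);
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            // Forward local port 8020 to the NameNode's RPC port inside the cluster.
            session.setPortForwardingL(8020, "namenode-private-hostname", 8020);
            System.out.println("Tunnel up; the client can now use hdfs://localhost:8020");
        }
    }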

I don't know whether I have created the tunnel correctly or not; when I try to run the client program, it gives me the error below:


PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

6:32:45.711 PM   INFO   org.apache.hadoop.ipc.Server
IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 108.161.91.186:54097: error: java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1331)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:480)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

Any idea?

You can verify the HDFS cluster health with hdfs fsck / (the -delete option removes corrupt files), and you can rebalance the DataNodes with hdfs balancer.

