hadoop - distcp - access execute permission error for HDFS file -


I am performing a distcp between 2 different clusters. I am doing it selectively, so it goes on a file-per-file basis. The permissions in both clusters are the same, and the user executing the distcp is the same (named xxx in the example). I am encountering an issue when copying: it is asking for execute permissions... on a file!

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=xxx, access=EXECUTE, inode="/mypath/myfile":xxx:xxx:-rw-r--r--
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:4660)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2911)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:673)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:643)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44128)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    at org.apache.hadoop.ipc.Client.call(Client.java:1225)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy10.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:628)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1545)
    ... 13 more

2015-05-11 10:22:49,005 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2015-05-11 10:22:49,008 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:xxx (auth:SIMPLE) cause:java.io.IOException: Copied: 0 Skipped: 0 Failed: 1
2015-05-11 10:22:49,008 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Copied: 0 Skipped: 0 Failed: 1
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:582)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
2015-05-11 10:22:49,013 INFO org.apache.hadoop.mapred.Task: Running cleanup for the task

where xxx is the user.

The file in the destination cluster has rw-r--r-- permissions set, and its folder has rwxr-xr-x. The file in the origin cluster has rw-r--r-- permissions set, and its folder has rwxrwxrwx.
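For reference, this is roughly how the permissions were checked on each cluster (output trimmed to the relevant columns; /mypath/myfile is the same placeholder path as in the stack trace, not the real one):

    $ hadoop fs -ls /mypath/myfile     # permissions of the file itself
    -rw-r--r--   3 xxx xxx ... /mypath/myfile
    $ hadoop fs -ls /                  # permissions of the containing folder
    drwxrwxrwx   - xxx xxx ... /mypath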

So yes, it is true: the file does not have execute permissions set.

But why is distcp asking for execute permissions on a file? In HDFS, supposedly, execute permissions on files have no effect. The distcp documentation says nothing about requiring execute permissions.

Note: I am using the -overwrite option in distcp, and nothing else. I am using CDH4.2.1 with DistCp version 1.
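For context, the invocation looks roughly like this (the namenode hosts and paths are placeholders, not the real ones):

    $ hadoop distcp -overwrite \
        hdfs://source-nn:8020/mypath/myfile \
        hdfs://dest-nn:8020/mypath/myfile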

This is apparently an undocumented quirk in how DistCp handles directories. DistCp does not understand whether the destination is a file or a directory: if the destination file already exists, it tries to access it as a directory, when it is in fact a file. Hence it fails because of the missing execute permissions.
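A minimal sketch of a workaround under that reading: remove the existing destination file before re-running the copy, so DistCp never tries to traverse it as if it were a directory. This is an assumption based on the behaviour described above, not a fix taken from the DistCp documentation; hosts and paths are the same placeholders as before:

    # delete the stale destination file so DistCp v1 does not try to
    # treat it as a directory (hypothetical workaround)
    $ hadoop fs -rm hdfs://dest-nn:8020/mypath/myfile

    # then re-run the copy
    $ hadoop distcp -overwrite \
        hdfs://source-nn:8020/mypath/myfile \
        hdfs://dest-nn:8020/mypath/myfile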

However, DistCp v1 development and support have been discontinued in favour of DistCp v2 (which is a complete rewrite), and which replaces DistCp on CDH5. This error and others regarding directory handling have been changed to a more intuitive, *nix-like scheme.

