
1 datanode cluster - repeated append/close - DFSClient thinks datanode is faulty

Description

Environment: a VM running a single-datanode HDFS cluster.
The following test fails:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.security.UserGroupInformation;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.0.2.15:8020");
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("vagrant");
DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(conf);

String filePath = "/test/file";
fs.delete(new Path(filePath), true);
fs.create(new Path(filePath)).close();

// first append: multiple writes to the out stream
FSDataOutputStream out = fs.append(new Path(filePath));
out.write(1);
out.write(2);
out.write(3);
out.close();

// second append
out = fs.append(new Path(filePath));
out.write(1);
// test fails here due to "dfs.client.block.write.replace-datanode-on-failure.policy"
// setting this policy to NEVER makes the test pass
out.close();

fs.close();

This test will pass (same imports as above):

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.0.2.15:8020");
// FIX?! Workaround?
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("vagrant");
DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(conf);

String filePath = "/test/file";
fs.delete(new Path(filePath), true);
fs.create(new Path(filePath)).close();

// first append: multiple writes to the out stream
FSDataOutputStream out = fs.append(new Path(filePath));
out.write(1);
out.write(2);
out.write(3);
out.close();

// second append
out = fs.append(new Path(filePath));
out.write(1);
out.close();

fs.close();
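Why this happens, as far as I can tell (treat this as a reading of the client behavior, not a confirmed diagnosis): under the DEFAULT value of dfs.client.block.write.replace-datanode-on-failure.policy, when the DFSClient marks a datanode in the append pipeline as bad it asks the namenode for a replacement; on a 1-datanode cluster there is no other node to substitute in, so the second append fails instead of continuing with the one remaining datanode. A minimal sketch of the client-side settings involved follows; note the best-effort key is an assumption in that it only exists on newer Hadoop releases:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://10.0.2.15:8020");

// Option 1: never attempt to replace a pipeline datanode on failure.
// Reasonable when the cluster has fewer datanodes than the replication
// factor, as in this single-datanode VM.
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

// Option 2 (newer releases only - assumption): keep the DEFAULT policy,
// but if no replacement datanode can be found, continue with the
// remaining pipeline instead of failing the write.
// conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
// conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

Either way this is client-side configuration only; nothing needs to change on the datanode. On clusters with three or more datanodes the DEFAULT policy is probably the right choice, since NEVER disables datanode replacement on every write, not just in the small-cluster case.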

Status

Assignee

Unassigned

Reporter

Alex Ormenisan

Labels

None

Priority

Medium