
Java ClosedChannelException when copying from the local file system to HDFS

I am trying to copy all of the files in a local file system directory into a folder on HDFS. I am running a single-node setup on my home computer inside an Ubuntu virtual machine. The application is multithreaded (via a ThreadPoolExecutor), but I don't think that is what is causing my problem. When I run it with the hdfs.close() call uncommented, I get multiple ClosedChannelExceptions. Am I closing the channel incorrectly?
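For context, I have read that FileSystem.get() normally hands back a cached instance that is shared by every caller using the same URI and configuration, so a close() in one thread would close it for all of my worker threads. A minimal sketch of that sharing (the dest URI below is a placeholder for illustration, not my real path):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    class CacheCheck {
        public static void main(String[] args) throws Exception {
            Configuration config = new Configuration();
            String dest = "hdfs://localhost:9000/user/me"; // placeholder URI for illustration
            FileSystem fs1 = FileSystem.get(URI.create(dest), config);
            FileSystem fs2 = FileSystem.get(URI.create(dest), config);
            System.out.println(fs1 == fs2); // typically true: get() returns a cached, shared instance
            fs1.close();                    // this also closes fs2, since it is the same object
        }
    }

If that is the issue, FileSystem.newInstance(URI, Configuration) is supposed to bypass the cache and return a private instance per call.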

The other problem I am having is that the files are not being compressed at all; the output files are actually slightly larger than the originals. The input files are just random bytes generated with dd on the command line. Is that kind of file simply not compressible?
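As a sanity check on the compression side, here is a small standalone sketch (plain java.util.zip rather than the Hadoop codec) comparing gzip on random bytes, like dd if=/dev/urandom produces, against highly repetitive bytes:

    import java.io.ByteArrayOutputStream;
    import java.util.Random;
    import java.util.zip.GZIPOutputStream;

    public class CompressDemo {
        static int gzipSize(byte[] data) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.size();
        }

        public static void main(String[] args) throws Exception {
            byte[] random = new byte[1 << 20];
            new Random(42).nextBytes(random);   // like dd if=/dev/urandom
            byte[] zeros = new byte[1 << 20];   // like dd if=/dev/zero

            System.out.println("random: " + gzipSize(random)); // slightly larger than 1 MiB
            System.out.println("zeros:  " + gzipSize(zeros));  // a tiny fraction of 1 MiB
        }
    }

Random data is already at maximum entropy, so gzip has nothing to remove, and its header/trailer overhead makes the output slightly larger, which matches what I am seeing.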

    public void run(){

        for (int i = 0; i < f.length; i++){
            FileSystem hdfs = null;
            FileInputStream is = null;
            OutputStream os = null;
            CompressionCodec compCodec = null;
            CompressionOutputStream compOs = null;

            try{
                String fname = f[i].getName();
                hdfs = FileSystem.get(URI.create(dest), config);
                is = new FileInputStream(f[i].getAbsolutePath());
                //InputStream is = new BufferedInputStream(fin);
                os = hdfs.create(new Path(dest + '/' + fname), true); // true = overwrite
                System.out.println(dest + '/' + fname);
                compCodec = new GzipCodec();
                compOs = compCodec.createOutputStream(os);
                IOUtils.copyBytes(is, compOs, config);

            }catch(IOException e){
                e.printStackTrace();
            }
            finally{
                try{
                    compOs.finish();
                    IOUtils.closeStream(is);
                    IOUtils.closeStream(compOs);
                    is.close();
                    os.close();
                    //hdfs.close();
                }catch(IOException e){
                    e.printStackTrace();
                }
            }
        }
    }
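One thing I noticed while re-reading the IOUtils javadoc: the three-argument copyBytes(in, out, conf) overload closes both streams itself when it finishes, so the finish() and close() calls in my finally block may be operating on streams that are already closed. There is a four-argument overload that leaves the streams open; a sketch of a hypothetical helper using it (copyAndFinish is my own name, not a Hadoop API):

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionOutputStream;

    class CopyHelper {
        // Hypothetical helper: copy, then finish and close explicitly.
        static void copyAndFinish(InputStream is, CompressionOutputStream compOs,
                                  Configuration config) throws IOException {
            IOUtils.copyBytes(is, compOs, config, false); // false = leave streams open
            compOs.finish();                              // write the gzip trailer
            IOUtils.closeStream(compOs);                  // also closes the wrapped HDFS stream
            IOUtils.closeStream(is);
        }
    }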

Edit: stack trace

    java.nio.channels.ClosedChannelException
        at org.apache.hadoop.hdfs.ExceptionLastSeen.throwException4Close(ExceptionLastSeen.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:153)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:253)
        at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
        at java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:145)
        at org.apache.hadoop.io.compress.GzipCodec$GzipOutputStream$ResetableGZIPOutputStream.write(GzipCodec.java:77)
        at org.apache.hadoop.io.compress.GzipCodec$GzipOutputStream.write(GzipCodec.java:118)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:96)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:114)
        at edu.jhu.bdpuh.hdfs.ThreadedCopy.run(ThreadedCopy.java:49)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

0 answers