Remove piped write when using Java Storage Client #1133
Conversation
/gcbrun
1 similar comment
/gcbrun
/gcbrun
private int writeInternal(ByteBuffer byteBuffer) throws IOException {
  int bytesWritten = writableByteChannel.write(byteBuffer);
  logger.atFinest().log(
      "%d bytes were written out of provided buffer of capacity %d",
      bytesWritten, byteBuffer.limit());
  return bytesWritten;
}

@Override
public int write(ByteBuffer src) throws IOException {
How do we guarantee that all bytes requested are actually written?
https://docs.oracle.com/javase/8/docs/api/java/nio/channels/WritableByteChannel.html
This should be fine. The write() interface comes from WritableByteChannel, and the caller is expected to handle partial writes.
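As a sketch of the pattern the reviewer is pointing at: since WritableByteChannel.write() may consume fewer bytes than remain in the buffer, callers that need a full write loop until the buffer is drained. The helper below is hypothetical (not code from this PR), assuming a standard WritableByteChannel.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical helper illustrating the standard loop for draining a
// ByteBuffer when WritableByteChannel.write() may perform partial writes.
final class ChannelWriteUtil {
  static int writeFully(WritableByteChannel channel, ByteBuffer src) throws IOException {
    int total = 0;
    while (src.hasRemaining()) {
      total += channel.write(src); // each call may write only part of src
    }
    return total;
  }
}
```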
private int writeInternal(ByteBuffer byteBuffer) throws IOException {
  int bytesWritten = writableByteChannel.write(byteBuffer);
  logger.atFinest().log(
      "%d bytes were written out of provided buffer of capacity %d",
      bytesWritten, byteBuffer.limit());
  return bytesWritten;
}

@Override
public int write(ByteBuffer src) throws IOException {
The call to the storage writer is blocking, so any subsequent writes may affect the runtime. Have we taken this into account?
Even the pipe-based implementation blocks once the buffer fills up. Since the caller is expected to be much faster than the network, the buffering is unlikely to help most of the time. In fact, based on our benchmarks, the non-pipe-based implementation is faster and consumes less memory.
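To illustrate the point about the pipe only deferring back-pressure: a JDK pipe has a fixed internal buffer, and the writing side blocks as soon as that buffer is full. This small standalone demo (not project code) uses a deliberately tiny 4-byte pipe buffer to show it filling up.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Demo: a pipe's writer blocks once its fixed internal buffer is full,
// so piping adds a bounded buffer but does not remove back-pressure.
public class PipeBackPressureDemo {
  public static void main(String[] args) throws IOException {
    PipedOutputStream out = new PipedOutputStream();
    PipedInputStream in = new PipedInputStream(out, 4); // 4-byte internal buffer
    out.write(new byte[4]); // exactly fills the buffer; one more byte would block
    System.out.println(in.available()); // prints 4: the buffer is full
  }
}
```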
@@ -111,121 +99,32 @@ public static void cleanUp() {
@Test
public void writeMultipleChunksSuccess() throws IOException {
  int numberOfChunks = 10;
  writeChannel.initialize();
  ByteString data =
      GoogleCloudStorageTestHelper.createTestData(
          MAX_WRITE_CHUNK_BYTES.getNumber() * numberOfChunks);
  writeChannel.write(data.asReadOnlyByteBuffer());
  writeChannel.close();
  // Fake writer only writes half the buffer at a time
The fakeWriteChannel implementation only writes half a buffer at a time. We should not revert it to a single write call; instead, fix the test to validate that the whole byte buffer is written properly.
We should add a test for an exception being thrown from the java-storage write channel.
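One way to sketch the suggested test (names hypothetical, not the project's actual test code): stub a write channel whose write() throws, then assert the failure propagates to the caller instead of being swallowed.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical failing test double: write() always throws, letting a test
// assert that the exception propagates out of the wrapping channel.
final class ThrowingWriteChannel implements WritableByteChannel {
  @Override
  public int write(ByteBuffer src) throws IOException {
    throw new IOException("simulated java-storage write failure");
  }

  @Override
  public boolean isOpen() {
    return true;
  }

  @Override
  public void close() {}
}
```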
gcsio/src/test/java/com/google/cloud/hadoop/gcsio/GoogleCloudStorageImplCreateTest.java
/gcbrun
Some minor comments, overall LGTM.
@@ -248,7 +243,7 @@ public void testCreateObjectApiInterruptedException() throws Exception {
    .thenReturn(
        new FakeWriteChannel() {
          @Override
          public int write(ByteBuffer src) {
- Does it really need to be handed over to the close function?
- Given we no longer have the pipe for the java-storage writer, this test seems to be overkill for java-storage. Should we bypass it?
@@ -343,8 +343,9 @@ public void uploadViaPCUInvalidPartFileNamePrefix() throws IOException {
byte[] bytesToWrite = new byte[partFileCount * bufferCapacity];
GoogleCloudStorageTestHelper.fillBytes(bytesToWrite);
WritableByteChannel writeChannel = gcs.create(resourceId);
writeChannel.write(ByteBuffer.wrap(bytesToWrite));
Why wasn't it showing up in write earlier? Is it because the actual remote call wasn't made until close, but with the pipe removal it will be?
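The timing difference being asked about can be sketched abstractly (hypothetical classes, not the project's code): a buffering/piped channel accepts write() and only surfaces a failure at close(), whereas writing straight through to the remote channel surfaces the same failure on the write() call itself.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch: with buffering, write() only copies bytes locally
// and the upload error is deferred until close().
final class DeferredFailChannel implements WritableByteChannel {
  @Override
  public int write(ByteBuffer src) { // buffered: never fails here
    int n = src.remaining();
    src.position(src.limit()); // pretend the bytes were accepted
    return n;
  }

  @Override
  public boolean isOpen() {
    return true;
  }

  @Override
  public void close() throws IOException {
    throw new IOException("upload failed"); // error deferred to close()
  }
}
```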
No description provided.