- The properties `drill.enable_unsafe_memory_access` and `arrow.enable_unsafe_memory_access` are prefixed with `siren`, and their default value is set to `true`. The first property is deprecated.
- In order to avoid a conflict with the version of `netty` used in Elasticsearch, we relocate the custom netty package and dependency in `memory` into a package named `siren`. The relocation is performed by the maven-shade-plugin.
- Siren's fork of `netty` is used in `vector`. This means that `netty` imports in that module need to be prefixed with `siren`.
- To check that the Siren version of Netty is being used, run the unit test `CheckAccessibleTest` in https://github.com/sirensolutions/siren-platform/blob/master/core/src/test/java/io/siren/federate/core/common/CheckAccessibleTest.java.
- Note: the unit test `CheckAccessibleTest` is currently ignored; please set it back to ignored after running the test. It is ignored because the setting in `CheckAccessibleTest` is not taken into account when the whole unit test suite is run, so the test fails. This is likely because the default settings are applied in a static block when the class is loaded, and the new settings in `CheckAccessibleTest` are therefore not applied when the suite is run.
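The relocation described above is typically configured with a maven-shade-plugin `<relocation>` rule. The following fragment is only a sketch, assuming the shaded package prefix is `siren.io.netty`; the actual patterns in the fork's `pom.xml` may differ:

```xml
<!-- Sketch only: the pattern and shadedPattern values are assumptions. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrites io.netty.* classes into the siren-prefixed package. -->
            <pattern>io.netty</pattern>
            <shadedPattern>siren.io.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```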
To build the `memory`, `format` and `vector` modules:

```
$ cd java
$ mvn clean package
```
Because of the changed default value of the `unsafe_memory_access` properties, some tests in `vector` fail. Build with the properties set to `false` instead:

```
$ mvn -pl memory,memory/memory-core,memory/memory-netty,memory/memory-unsafe,format,vector install -Dsiren.arrow.enable_unsafe_memory_access=false -Dsiren.drill.enable_unsafe_memory_access=false
```
Tests should pass.
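The `siren`-prefixed properties passed with `-D` above default to `true`. As a minimal sketch of how such a boolean property with a `true` default can be read (illustrative only, not Federate's actual lookup code):

```java
public class UnsafeMemoryAccessCheck {
    // Property name taken from the notes above; the lookup logic itself
    // is an illustrative assumption, not the fork's actual code.
    static final String PROP = "siren.arrow.enable_unsafe_memory_access";

    static boolean unsafeMemoryAccessEnabled() {
        // Defaults to true when the property is unset, matching the
        // fork's default described above.
        return Boolean.parseBoolean(System.getProperty(PROP, "true"));
    }

    public static void main(String[] args) {
        System.out.println(unsafeMemoryAccessEnabled());
    }
}
```

Running the class with `-Dsiren.arrow.enable_unsafe_memory_access=false` would make the method return `false`, mirroring the flags used in the `mvn` invocation above.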
- Make a new version: `mvn versions:set -DnewVersion=siren-0.14.1-2`
- Tag the commit for the release: `git tag --sign siren-0.14.1-2`
- Deploy to Siren's artifactory: `mvn deploy -DskipTests=true -P artifactory -Dartifactory_username=<USERNAME> -Dartifactory_password=<PASSWORD>`
Developer tips on updating to a new version of Netty can be found here: https://sirensolutions.atlassian.net/wiki/spaces/EN/pages/3108864001/Upgrading+Federate+Apache+Arrow+Version.
- Add `[email protected]:apache/arrow.git` as the `upstream` remote.
- Execute `git fetch --all --tags`.
- Create a temporary branch from `siren-changes`.
- Rebase against the new tag.
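Put together, the steps above correspond to a transcript along these lines, where `<new-tag>` stands for the upstream release tag and the temporary branch name is illustrative:

```
$ git remote add upstream [email protected]:apache/arrow.git
$ git fetch --all --tags
$ git checkout -b rebase-<new-tag> siren-changes
$ git rebase <new-tag>
```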
Apache Arrow is a development platform for in-memory analytics. It contains a set of technologies that enable big data systems to process and move data fast.
Major components of the project include:
- The Arrow Columnar In-Memory Format: a standard and efficient in-memory representation of various datatypes, plain or nested
- The Arrow IPC Format: an efficient serialization of the Arrow format and associated metadata, for communication between processes and heterogeneous environments
- The Arrow Flight RPC protocol: based on the Arrow IPC format, a building block for remote services exchanging Arrow data with application-defined semantics (for example a storage server or a database)
- C++ libraries
- C bindings using GLib
- C# .NET libraries
- Gandiva: an LLVM-based Arrow expression compiler, part of the C++ codebase
- Go libraries
- Java libraries
- JavaScript libraries
- Plasma Object Store: a shared-memory blob store, part of the C++ codebase
- Python libraries
- R libraries
- Ruby libraries
- Rust libraries
Arrow is an Apache Software Foundation project. Learn more at arrow.apache.org.
The reference Arrow libraries contain many distinct software components:
- Columnar vector and table-like containers (similar to data frames) supporting flat or nested types
- Fast, language agnostic metadata messaging layer (using Google's Flatbuffers library)
- Reference-counted off-heap buffer memory management, for zero-copy memory sharing and handling memory-mapped files
- IO interfaces to local and remote filesystems
- Self-describing binary wire formats (streaming and batch/file-like) for remote procedure calls (RPC) and interprocess communication (IPC)
- Integration tests for verifying binary compatibility between the implementations (e.g. sending data from Java to C++)
- Conversions to and from other in-memory data structures
- Readers and writers for various widely-used file formats (such as Parquet, CSV)
The official Arrow libraries in this repository are in different stages of implementing the Arrow format and related features. See our current feature matrix on git master.
Please read our latest project contribution guide.
Even if you do not plan to contribute to Apache Arrow itself or Arrow integrations in other projects, we'd be happy to have you involved:
- Join the mailing list: send an email to [email protected]. Share your ideas and use cases for the project.
- Follow our activity on GitHub issues
- Learn the format
- Contribute code to one of the reference implementations