---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      funktion.fabric8.io/kind: Connector
      provider: fabric8
      project: connector-hdfs2
      version: 1.1.27
      group: io.fabric8.funktion.connector
    name: hdfs2
  data:
    deployment.yml: |
      ---
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        labels:
          funktion.fabric8.io/kind: Subscription
          connector: hdfs2
      spec:
        replicas: 1
        template:
          metadata:
            labels:
              funktion.fabric8.io/kind: Subscription
              connector: hdfs2
          spec:
            containers:
            - image: fabric8/connector-hdfs2:1.1.27
              name: connector
    schema.yml: |
      ---
      component:
        kind: component
        scheme: hdfs2
        syntax: hdfs2:hostName:port/path
        title: HDFS2
        description: For reading/writing from/to an HDFS filesystem using Hadoop 2.x.
        label: hadoop,file
        deprecated: false
        async: false
        javaType: org.apache.camel.component.hdfs2.HdfsComponent
        groupId: org.apache.camel
        artifactId: camel-hdfs2
        version: 2.18.1
      componentProperties:
        jAASConfiguration:
          kind: property
          type: object
          javaType: javax.security.auth.login.Configuration
          deprecated: false
          secret: false
          description: To use the given configuration for security with JAAS.
      properties:
        hostName:
          kind: path
          group: common
          required: true
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          description: HDFS host to use
        port:
          kind: path
          group: common
          type: integer
          javaType: int
          deprecated: false
          secret: false
          defaultValue: "8020"
          description: HDFS port to use
        path:
          kind: path
          group: common
          required: true
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          description: The directory path to use
        connectOnStartup:
          kind: parameter
          group: common
          type: boolean
          javaType: boolean
          deprecated: false
          secret: false
          defaultValue: true
          description: Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has a hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
        fileSystemType:
          kind: parameter
          group: common
          type: string
          javaType: org.apache.camel.component.hdfs2.HdfsFileSystemType
          enum:
          - LOCAL
          - HDFS
          deprecated: false
          secret: false
          defaultValue: HDFS
          description: Set to LOCAL to not use HDFS but local java.io.File instead.
        fileType:
          kind: parameter
          group: common
          type: string
          javaType: org.apache.camel.component.hdfs2.HdfsFileType
          enum:
          - NORMAL_FILE
          - SEQUENCE_FILE
          - MAP_FILE
          - BLOOMMAP_FILE
          - ARRAY_FILE
          deprecated: false
          secret: false
          defaultValue: NORMAL_FILE
          description: The file type to use. For more details see Hadoop HDFS documentation about the various file types.
        keyType:
          kind: parameter
          group: common
          type: string
          javaType: org.apache.camel.component.hdfs2.WritableType
          enum:
          - NULL
          - BOOLEAN
          - BYTE
          - INT
          - FLOAT
          - LONG
          - DOUBLE
          - TEXT
          - BYTES
          deprecated: false
          secret: false
          defaultValue: NULL
          description: The type for the key in case of sequence or map files.
        owner:
          kind: parameter
          group: common
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          description: The file owner must match this owner for the consumer to pick up the file. Otherwise the file is skipped.
        valueType:
          kind: parameter
          group: common
          type: string
          javaType: org.apache.camel.component.hdfs2.WritableType
          enum:
          - NULL
          - BOOLEAN
          - BYTE
          - INT
          - FLOAT
          - LONG
          - DOUBLE
          - TEXT
          - BYTES
          deprecated: false
          secret: false
          defaultValue: BYTES
          description: The type for the value in case of sequence or map files
        bridgeErrorHandler:
          kind: parameter
          group: consumer
          label: consumer
          type: boolean
          javaType: boolean
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: false
          description: Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored.
        delay:
          kind: parameter
          group: consumer
          label: consumer
          type: integer
          javaType: long
          deprecated: false
          secret: false
          defaultValue: "1000"
          description: The interval (milliseconds) between the directory scans.
        initialDelay:
          kind: parameter
          group: consumer
          label: consumer
          type: integer
          javaType: long
          deprecated: false
          secret: false
          description: How long to wait (milliseconds) before the consumer starts scanning the directory.
        pattern:
          kind: parameter
          group: consumer
          label: consumer
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          defaultValue: '*'
          description: The pattern used for scanning the directory
        sendEmptyMessageWhenIdle:
          kind: parameter
          group: consumer
          label: consumer
          type: boolean
          javaType: boolean
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: false
          description: If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
        exceptionHandler:
          kind: parameter
          group: consumer (advanced)
          label: consumer,advanced
          type: object
          javaType: org.apache.camel.spi.ExceptionHandler
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored.
        exchangePattern:
          kind: parameter
          group: consumer (advanced)
          label: consumer,advanced
          type: string
          javaType: org.apache.camel.ExchangePattern
          enum:
          - InOnly
          - RobustInOnly
          - InOut
          - InOptionalOut
          - OutOnly
          - RobustOutOnly
          - OutIn
          - OutOptionalIn
          deprecated: false
          secret: false
          description: Sets the exchange pattern when the consumer creates an exchange.
        pollStrategy:
          kind: parameter
          group: consumer (advanced)
          label: consumer,advanced
          type: object
          javaType: org.apache.camel.spi.PollingConsumerPollStrategy
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
        append:
          kind: parameter
          group: producer
          label: producer
          type: boolean
          javaType: boolean
          deprecated: false
          secret: false
          defaultValue: false
          description: Append to existing file. Notice that not all HDFS file systems support the append option.
        overwrite:
          kind: parameter
          group: producer
          label: producer
          type: boolean
          javaType: boolean
          deprecated: false
          secret: false
          defaultValue: true
          description: Whether to overwrite existing files with the same name
        blockSize:
          kind: parameter
          group: advanced
          label: advanced
          type: integer
          javaType: long
          deprecated: false
          secret: false
          defaultValue: "67108864"
          description: The size of the HDFS blocks
        bufferSize:
          kind: parameter
          group: advanced
          label: advanced
          type: integer
          javaType: int
          deprecated: false
          secret: false
          defaultValue: "4096"
          description: The buffer size used by HDFS
        checkIdleInterval:
          kind: parameter
          group: advanced
          label: advanced
          type: integer
          javaType: int
          deprecated: false
          secret: false
          defaultValue: "500"
          description: How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
        chunkSize:
          kind: parameter
          group: advanced
          label: advanced
          type: integer
          javaType: int
          deprecated: false
          secret: false
          defaultValue: "4096"
          description: When reading a normal file, it is split into chunks producing a message per chunk.
        compressionCodec:
          kind: parameter
          group: advanced
          label: advanced
          type: string
          javaType: org.apache.camel.component.hdfs2.HdfsCompressionCodec
          enum:
          - DEFAULT
          - GZIP
          - BZIP2
          deprecated: false
          secret: false
          defaultValue: DEFAULT
          description: The compression codec to use
        compressionType:
          kind: parameter
          group: advanced
          label: advanced
          type: object
          javaType: org.apache.hadoop.io.SequenceFile.CompressionType
          deprecated: false
          secret: false
          defaultValue: NONE
          description: The compression type to use (not in use by default)
        openedSuffix:
          kind: parameter
          group: advanced
          label: advanced
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          defaultValue: opened
          description: When a file is opened for reading/writing, it is renamed with this suffix to avoid reading it during the writing phase.
        readSuffix:
          kind: parameter
          group: advanced
          label: advanced
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          defaultValue: read
          description: Once the file has been read, it is renamed with this suffix to avoid reading it again.
        replication:
          kind: parameter
          group: advanced
          label: advanced
          type: integer
          javaType: short
          deprecated: false
          secret: false
          defaultValue: "3"
          description: The HDFS replication factor
        splitStrategy:
          kind: parameter
          group: advanced
          label: advanced
          type: string
          javaType: java.lang.String
          deprecated: false
          secret: false
          description: 'In the current version of Hadoop opening a file in append mode is disabled since it''s not very reliable. So for the moment it''s only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES a new file is created and the old is closed when the number of written bytes is more than value. MESSAGES a new file is created and the old is closed when the number of written messages is more than value. IDLE a new file is created and the old is closed when no writing happened in the last value milliseconds.'
        synchronous:
          kind: parameter
          group: advanced
          label: advanced
          type: boolean
          javaType: boolean
          deprecated: false
          secret: false
          defaultValue: false
          description: Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
        backoffErrorThreshold:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: integer
          javaType: int
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.
        backoffIdleThreshold:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: integer
          javaType: int
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.
        backoffMultiplier:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: integer
          javaType: int
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
        greedy:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: boolean
          javaType: boolean
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: false
          description: If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
        runLoggingLevel:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: string
          javaType: org.apache.camel.LoggingLevel
          enum:
          - TRACE
          - DEBUG
          - INFO
          - WARN
          - ERROR
          - OFF
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: TRACE
          description: The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
        scheduledExecutorService:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: object
          javaType: java.util.concurrent.ScheduledExecutorService
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          description: Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
        scheduler:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: string
          javaType: org.apache.camel.spi.ScheduledPollConsumerScheduler
          enum:
          - none
          - spring
          - quartz2
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: none
          description: To use a cron scheduler from either the camel-spring or camel-quartz2 component
        schedulerProperties:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: object
          javaType: java.util.Map
          prefix: scheduler.
          multiValue: true
          deprecated: false
          secret: false
          description: To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers.
        startScheduler:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: boolean
          javaType: boolean
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: true
          description: Whether the scheduler should be auto started.
        timeUnit:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: string
          javaType: java.util.concurrent.TimeUnit
          enum:
          - NANOSECONDS
          - MICROSECONDS
          - MILLISECONDS
          - SECONDS
          - MINUTES
          - HOURS
          - DAYS
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: MILLISECONDS
          description: Time unit for initialDelay and delay options.
        useFixedDelay:
          kind: parameter
          group: scheduler
          label: consumer,scheduler
          type: boolean
          javaType: boolean
          optionalPrefix: consumer.
          deprecated: false
          secret: false
          defaultValue: true
          description: Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
    documentation.adoc: |
      [[HDFS2-HDFS2Component]]
      HDFS2 Component
      ~~~~~~~~~~~~~~~

      *Available as of Camel 2.13*

      The *hdfs2* component enables you to read and write messages from/to an
      HDFS file system using Hadoop 2.x. HDFS is the distributed file system at
      the heart of http://hadoop.apache.org[Hadoop].

      Maven users will need to add the following dependency to their `pom.xml`
      for this component:

      [source,xml]
      ------------------------------------------------------------
      <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-hdfs2</artifactId>
        <version>x.x.x</version>
      </dependency>
      ------------------------------------------------------------

      [[HDFS2-URIformat]]
      URI format
      ^^^^^^^^^^

      [source,java]
      ----------------------------------------
      hdfs2://hostname[:port][/path][?options]
      ----------------------------------------

      You can append query options to the URI in the following format,
      `?option=value&option=value&...` +
      The path is treated in the following way:

      1. as a consumer, if it's a file, it just reads the file; otherwise, if it
      represents a directory, it scans all the files under the path satisfying
      the configured pattern. All the files under that directory must be of
      the same type.
      2. as a producer, if at least one split strategy is defined, the path is
      considered a directory and under that directory the producer creates a
      different file per split named using the configured
      link:uuidgenerator.html[UuidGenerator].

      When consuming from hdfs2 in normal mode, a file is split into chunks,
      producing a message per chunk. You can configure the size of the chunk
      using the chunkSize option. If you want to read from hdfs and write to a
      regular file using the file component, then you can use fileMode=Append
      to append each of the chunks together.
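
      As a quick, hedged illustration of a consuming route (the host name,
      port, directory, and route class below are placeholders, not defaults of
      the component), a minimal Java DSL sketch could look like this:

      [source,java]
      ----------------------------------------------------------------------
      import org.apache.camel.builder.RouteBuilder;

      public class HdfsConsumerRoute extends RouteBuilder {
          @Override
          public void configure() {
              // Scan /data/input on a (hypothetical) namenode host every 5 seconds
              // and emit one message per 4096-byte chunk of each matching file.
              from("hdfs2://namenode:8020/data/input?pattern=*&chunkSize=4096&delay=5000")
                  .to("log:hdfs2-chunks");
          }
      }
      ----------------------------------------------------------------------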

      [[HDFS2-Options]]
      Options
      ^^^^^^^

      // component options: START
      The HDFS2 component supports 1 option which is listed below.
      {% raw %}
      [width="100%",cols="2,1m,7",options="header"]
      |=======================================================================
      | Name | Java Type | Description
      | jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
      |=======================================================================
      {% endraw %}
      // component options: END
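
      The component option above is set on the `HdfsComponent` instance itself
      rather than on an endpoint URI. As a hedged sketch (assuming the
      conventionally generated `setJAASConfiguration` setter and a helper class
      of your own), it could be applied before the routes are started:

      [source,java]
      ----------------------------------------------------------------------
      import javax.security.auth.login.Configuration;

      import org.apache.camel.CamelContext;
      import org.apache.camel.component.hdfs2.HdfsComponent;

      public class HdfsJaasSetup {
          // Pass the JVM's current JAAS configuration to the hdfs2 component.
          public static void configure(CamelContext context) {
              HdfsComponent hdfs = context.getComponent("hdfs2", HdfsComponent.class);
              hdfs.setJAASConfiguration(Configuration.getConfiguration());
          }
      }
      ----------------------------------------------------------------------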

      // endpoint options: START
      The HDFS2 component supports 41 endpoint options which are listed below:
      {% raw %}
      [width="100%",cols="2,1,1m,1m,5",options="header"]
      |=======================================================================
      | Name | Group | Default | Java Type | Description
      | hostName | common |  | String | *Required* HDFS host to use
      | port | common | 8020 | int | HDFS port to use
      | path | common |  | String | *Required* The directory path to use
      | connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has a hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
      | fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
      | fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see Hadoop HDFS documentation about the various file types.
      | keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
      | owner | common |  | String | The file owner must match this owner for the consumer to pick up the file. Otherwise the file is skipped.
      | valueType | common | BYTES | WritableType | The type for the value in case of sequence or map files
      | bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored.
      | delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
      | initialDelay | consumer |  | long | How long to wait (milliseconds) before the consumer starts scanning the directory.
      | pattern | consumer | * | String | The pattern used for scanning the directory
      | sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
      | exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored.
      | exchangePattern | consumer (advanced) |  | ExchangePattern | Sets the exchange pattern when the consumer creates an exchange.
      | pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
      | append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
      | overwrite | producer | true | boolean | Whether to overwrite existing files with the same name
      | blockSize | advanced | 67108864 | long | The size of the HDFS blocks
      | bufferSize | advanced | 4096 | int | The buffer size used by HDFS
      | checkIdleInterval | advanced | 500 | int | How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
      | chunkSize | advanced | 4096 | int | When reading a normal file, it is split into chunks producing a message per chunk.
      | compressionCodec | advanced | DEFAULT | HdfsCompressionCodec | The compression codec to use
      | compressionType | advanced | NONE | CompressionType | The compression type to use (not in use by default)
      | openedSuffix | advanced | opened | String | When a file is opened for reading/writing, it is renamed with this suffix to avoid reading it during the writing phase.
      | readSuffix | advanced | read | String | Once the file has been read, it is renamed with this suffix to avoid reading it again.
      | replication | advanced | 3 | short | The HDFS replication factor
      | splitStrategy | advanced |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES a new file is created and the old is closed when the number of written bytes is more than value. MESSAGES a new file is created and the old is closed when the number of written messages is more than value. IDLE a new file is created and the old is closed when no writing happened in the last value milliseconds.
      | synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
      | backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.
      | backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.
      | backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
      | greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
      | runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
      | scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
      | scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either the camel-spring or camel-quartz2 component
      | schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers.
      | startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
      | timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
      | useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
      |=======================================================================
      {% endraw %}
      // endpoint options: END
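
      To make these option groups concrete, here is a hedged sketch of a
      producer route combining a few of them (the host name, path, and route
      class are placeholders): it writes each message body as a Text value into
      an HDFS sequence file.

      [source,java]
      ----------------------------------------------------------------------
      import org.apache.camel.builder.RouteBuilder;

      public class HdfsSequenceFileRoute extends RouteBuilder {
          @Override
          public void configure() {
              // Every second, write a small text message into a SEQUENCE_FILE
              // with a NULL key and a TEXT value (see the key/value types below).
              from("timer:writer?period=1000")
                  .setBody(constant("hello hdfs2"))
                  .to("hdfs2://namenode:8020/data/seq?fileType=SEQUENCE_FILE&keyType=NULL&valueType=TEXT");
          }
      }
      ----------------------------------------------------------------------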

      [[HDFS2-KeyTypeandValueType]]
      KeyType and ValueType
      +++++++++++++++++++++

      * NULL it means that the key or the value is absent
      * BYTE for writing a byte, the java Byte class is mapped into a BYTE
      * BYTES for writing a sequence of bytes. It maps the java ByteBuffer class
      * INT for writing java integer
      * FLOAT for writing java float
      * LONG for writing java long
      * DOUBLE for writing java double
      * TEXT for writing java strings

      BYTES is also used with everything else; for example, in Camel a file is
      sent around as an InputStream, in which case it is written in a sequence
      file or a map file as a sequence of bytes.

      [[HDFS2-SplittingStrategy]]
      Splitting Strategy
      ^^^^^^^^^^^^^^^^^^

      In the current version of Hadoop opening a file in append mode is
      disabled since it's not very reliable. So, for the moment, it's only
      possible to create new files. The Camel HDFS endpoint tries to solve
      this problem in this way:

      * If the split strategy option has been defined, the hdfs path will be
      used as a directory and files will be created using the configured
      link:uuidgenerator.html[UuidGenerator]
      * Every time a splitting condition is met, a new file is created. +
      The splitStrategy option is defined as a string with the following
      syntax: splitStrategy=<ST>:<value>,<ST>:<value>,*

      where <ST> can be:

      * BYTES a new file is created, and the old is closed when the number of
      written bytes is more than <value>
      * MESSAGES a new file is created, and the old is closed when the number
      of written messages is more than <value>
      * IDLE a new file is created, and the old is closed when no writing
      happened in the last <value> milliseconds

      Note that this strategy currently requires either setting an IDLE value
      or setting the HdfsConstants.HDFS_CLOSE header to false to use the
      BYTES/MESSAGES configuration... otherwise, the file will be closed with
      each message.

      For example:

      [source,java]
      -----------------------------------------------------------------
      hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
      -----------------------------------------------------------------

      This means a new file is created either when it has been idle for more
      than 1 second or when more than 5 bytes have been written. So, running
      `hadoop fs -ls /tmp/simple-file` you'll see that multiple files have
      been created.

      [[HDFS2-MessageHeaders]]
      Message Headers
      ^^^^^^^^^^^^^^^

      The following headers are supported by this component:

      [[HDFS2-Produceronly]]
      Producer only
      +++++++++++++

      [width="100%",cols="10%,90%",options="header",]
      |=======================================================================
      |Header |Description

      |`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write
      (relative to the endpoint path). The name can be a `String` or an
      link:expression.html[Expression] object. Only relevant when not using a
      split strategy.
      |=======================================================================
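
      For instance, a hedged sketch of a producer route that names each file
      explicitly via this header (the endpoint values and route class are
      placeholders, and the route assumes no split strategy is configured):

      [source,java]
      ----------------------------------------------------------------------
      import org.apache.camel.builder.RouteBuilder;

      public class HdfsNamedFileRoute extends RouteBuilder {
          @Override
          public void configure() {
              // Write each message under /reports using a timestamped file name.
              from("direct:toHdfs")
                  .setHeader("CamelFileName", simple("report-${date:now:yyyyMMddHHmmss}.txt"))
                  .to("hdfs2://namenode:8020/reports");
          }
      }
      ----------------------------------------------------------------------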

      [[HDFS2-Controllingtoclosefilestream]]
      Controlling to close file stream
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

      When using the link:hdfs2.html[HDFS2] producer *without* a split
      strategy, the file output stream is by default closed after the write.
      However, you may want to keep the stream open and only explicitly close
      it later. For that you can use the header `HdfsConstants.HDFS_CLOSE`
      (value = `"CamelHdfsClose"`) to control this. Setting this value to a
      boolean allows you to explicitly control whether the stream should be
      closed or not.

      Notice this does not apply if you use a split strategy, as there are
      various strategies that can control when the stream is closed.

      [[HDFS2-UsingthiscomponentinOSGi]]
      Using this component in OSGi
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

      There are some quirks when running this component in an OSGi environment,
      related to the mechanism Hadoop 2.x uses to discover different
      `org.apache.hadoop.fs.FileSystem` implementations. Hadoop 2.x uses
      `java.util.ServiceLoader`, which looks for
      `/META-INF/services/org.apache.hadoop.fs.FileSystem` files defining
      available filesystem types and implementations. These resources are not
      available when running inside OSGi.

      As with the `camel-hdfs` component, the default configuration files need
      to be visible from the bundle class loader. A typical way to deal with it
      is to keep a copy of `core-default.xml` (and e.g., `hdfs-default.xml`) in
      your bundle root.

      [[HDFS2-Usingthiscomponentwithmanuallydefinedroutes]]
      Using this component with manually defined routes
      +++++++++++++++++++++++++++++++++++++++++++++++++

      There are two options:

      1. Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
      resource with the bundle that defines the routes. This resource should
      list all the required Hadoop 2.x filesystem implementations.
      2. Provide boilerplate initialization code which populates the internal,
      static cache inside the `org.apache.hadoop.fs.FileSystem` class:

      [source,java]
      ----------------------------------------------------------------------------------------------------
      org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
      conf.setClass("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class);
      conf.setClass("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class);
      ...
      FileSystem.get(URI.create("file:///"), conf);
      FileSystem.get(URI.create("hdfs://localhost:9000/"), conf);
      ...
      ----------------------------------------------------------------------------------------------------

      [[HDFS2-UsingthiscomponentwithBlueprintcontainer]]
      Using this component with Blueprint container
      +++++++++++++++++++++++++++++++++++++++++++++

      Two options:

      1. Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
      resource with the bundle that contains the blueprint definition.
      2. Add the following to the blueprint definition file:

      [source,java]
      ------------------------------------------------------------------------------------------------------
      ...
      ------------------------------------------------------------------------------------------------------

      This way Hadoop 2.x will have the correct mapping of URI schemes to
      filesystem implementations.
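
      The Blueprint XML above is left elided. As a hedged sketch of option 2
      (the class name and the Blueprint wiring are hypothetical, not part of
      camel-hdfs2), the boilerplate initialization can be wrapped in a small
      bean class and referenced from a Blueprint `<bean>` element with
      `init-method="init"` and eager activation:

      [source,java]
      ----------------------------------------------------------------------
      import java.net.URI;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.LocalFileSystem;
      import org.apache.hadoop.hdfs.DistributedFileSystem;

      // Hypothetical helper: registers the filesystem implementations so that
      // FileSystem.get(...) calls made later by camel-hdfs2 find them in the
      // static cache even though ServiceLoader discovery is unavailable in OSGi.
      public class HdfsFileSystemInitializer {

          public void init() throws Exception {
              Configuration conf = new Configuration();
              conf.setClass("fs.file.impl", LocalFileSystem.class, FileSystem.class);
              conf.setClass("fs.hdfs.impl", DistributedFileSystem.class, FileSystem.class);
              // localhost:9000 is a placeholder for your name node address.
              FileSystem.get(URI.create("file:///"), conf);
              FileSystem.get(URI.create("hdfs://localhost:9000/"), conf);
          }
      }
      ----------------------------------------------------------------------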