**EDIT:** Apologies for the formatting, I can't seem to get it to work correctly.

Hello, I'm hoping to run Elasticsearch aggregations, but my dataset no longer fits the index/mapping I originally implemented. I originally had only a single data dimension, but I now need to handle 2D data points (well, 3D if you include time). At the moment I'm shoving an **array of data pairs (2-element arrays)** into a field of type **float**, which works, but I can't run any Elasticsearch aggregations against the array data. I know I could achieve this with a structured, nested type, and I would like to reindex to that.
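For concreteness, here is roughly what a document looks like at the moment (the `Name` value and the numbers are made up for illustration; `Value` really is just a flat `float` field receiving arrays of pairs):

    {
      "Name": "2D Data - sensor_42",
      "Value": [ [1.5, 10], [2.7, 20], [3.1, 30] ]
    }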

The current definition for the field (called `Value`) is: `"Value": { "type": "float" }`

I have created a new index with the following mappings that I would like to reindex into. The `Value` field is defined as follows in the mappings:

>     mapping_1d - "Value": { "type": "float" }
>     mapping_2d - "Value": {
>         "type": "nested",
>         "properties": {
>             "A": { "type": "float" },
>             "B": { "type": "integer" }
>         }
>     }
What I'm struggling with is the inline script to map the data correctly in the `_reindex` POST call. The pseudocode would be something along the lines of:
>     IF (Name CONTAINS "2D Data") THEN
>         FOR EACH element IN OldValue
>             Value.A = element[0]
>             Value.B = element[1]
>     ELSE
>         Value = OldValue

Basically I want to map the old values based on whether they are an array or not. If a value is an array of pairs, I want to put it into the "mapping_2d" structure, splitting each pair into the appropriate fields. If it is a single value, I just want to pipe it through unchanged, as in "mapping_1d". Is this possible to achieve using Painless? How would I go about constructing the script?
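For what it's worth, my best guess at the shape of the call is something like the following — the `old_index`/`new_index` names are placeholders, and I'm not at all sure the Painless body is correct (single-value documents are meant to fall through untouched):

    POST _reindex
    {
      "source": { "index": "old_index" },
      "dest":   { "index": "new_index" },
      "script": {
        "lang": "painless",
        "source": "if (ctx._source.Name.contains('2D Data')) { def pairs = []; for (def element : ctx._source.Value) { pairs.add(['A': element[0], 'B': element[1]]); } ctx._source.Value = pairs; }"
      }
    }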
