Elasticsearch is an advanced open source search server based on Lucene and written in Java.
It provides distributed full-text and partial-text, query-based, and geolocation-based search functionality, accessible through an HTTP REST API.
Cluster Health provides a lot of information about the cluster, such as the number of shards that are allocated ("active") as well as how many are unassigned and relocating. In addition, it provides the current number of nodes and data nodes in the cluster, which allows you to poll for missing nodes (e.g., if you expect it to be 15, but it only shows 14, then you are missing a node).
For someone who knows Elasticsearch, the "assigned" and "unassigned" shard counts can help to track down issues.
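A quick way to inspect these fields is the Cluster Health API. A sketch of a request, with an illustrative (not literal) response; the cluster name and all counts are made up for this example:

```
GET /_cluster/health

{
  "cluster_name": "my_cluster",
  "status": "yellow",
  "number_of_nodes": 14,
  "number_of_data_nodes": 10,
  "active_primary_shards": 120,
  "active_shards": 228,
  "relocating_shards": 0,
  "initializing_shards": 2,
  "unassigned_shards": 10
}
```

Here the node counts would reveal the missing node, and the non-zero unassigned_shards explains the yellow status discussed next.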
The most common field checked from Cluster Health is the status, which can be in one of three states: green, yellow, or red. The colors each mean one -- and only one -- very simple thing:

- red: at least one primary shard is not allocated (it is counted as unassigned or still among the initializing_shards).
- yellow: all primary shards are allocated, but at least one replica shard is not (again, check initializing_shards).
- green: every primary and replica shard is allocated, though some may still be moving between nodes (relocating_shards).

Elasticsearch comes with a set of defaults that provide a good out-of-the-box experience for development. The implicit caveat is that those defaults are not necessarily great for production, which must be tailored to your own needs and therefore cannot be predicted.
The default settings make it easy to download and run multiple nodes on the same machine without any configuration changes.
Inside each installation of Elasticsearch is a config/elasticsearch.yml file. That is where the following settings live:
- cluster.name — Defaults to elasticsearch.

node.* settings:

- node.name — The human-readable name of this node.
- node.master — When true, the node is an eligible master node and can become the elected master node. Defaults to true, meaning every node is an eligible master node.
- node.data — When true, the node stores data and handles search activity. Defaults to true.

path.* settings:

- path.data — Defaults to ./data. The data directory will be created for you as a peer directory to config inside of the Elasticsearch directory.
- path.logs — Defaults to ./logs.

network.* settings:

- network.host — Defaults to _local_, which is effectively localhost.
- network.bind_host — Defaults to the value of network.host.
- network.publish_host — Defaults to the value of network.bind_host. This should be the one host that is intended to be used for inter-node communication.

discovery.zen.* settings:

- discovery.zen.minimum_master_nodes — Should be set to (M / 2) + 1, where M is the number of eligible master nodes (nodes using node.master: true implicitly or explicitly). Defaults to 1, which is only valid for a single-node cluster!
- discovery.zen.ping.unicast.hosts — Should be set to a list of the other nodes in the cluster, using the network.publish_host of those other nodes. Defaults to localhost
, which means it only looks on the local machine for a cluster to join.

Elasticsearch provides three different types of settings: cluster-wide settings, node-level settings, and index-level settings.
Depending on the setting, it can be static (read only at startup, requiring a restart to change) or dynamic (updatable on a running cluster).
Always check the documentation for your version of Elasticsearch for what you can or cannot do with a setting.
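Putting the static settings above together, a minimal config/elasticsearch.yml for one node of a hypothetical three-node cluster might look like the following sketch (the cluster name, node name, hosts, and paths are all made up for illustration):

```yaml
# Identify the cluster and this node
cluster.name: my_cluster
node.name: node-1

# This node is master-eligible and holds data (both are the defaults)
node.master: true
node.data: true

# Where data and logs are written
path.data: /var/data/elasticsearch
path.logs: /var/log/elasticsearch

# The host used for binding and for inter-node communication
network.host: 192.168.1.10

# Quorum for 3 eligible master nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2

# Other nodes to contact when discovering the cluster
discovery.zen.ping.unicast.hosts: ["192.168.1.11", "192.168.1.12"]
```

Note that discovery.zen.minimum_master_nodes is 2 rather than the default of 1, following the (M / 2) + 1 rule above.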
You can set settings a few ways, some of which are not suggested:
In Elasticsearch 1.x and 2.x, you can submit most settings as Java system properties prefixed with es.:
$ bin/elasticsearch -Des.cluster.name=my_cluster -Des.node.name=`hostname`
In Elasticsearch 5.x, this changes to avoid using Java system properties, instead using a custom argument type with -E taking the place of -Des.:
$ bin/elasticsearch -Ecluster.name=my_cluster -Enode.name=`hostname`
This approach to applying settings works great when using tools like Puppet, Chef, or Ansible to start and stop the cluster. However, it works very poorly when doing it manually.
Settings are applied in order from most dynamic to least dynamic:

1. Transient settings applied via the Cluster Update Settings API
2. Persistent settings applied via the Cluster Update Settings API
3. Command-line settings supplied at startup
4. Settings from config/elasticsearch.yml

If the same setting is set at more than one of those levels, then the highest (most dynamic) level takes effect.
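Dynamic settings are changed at runtime through the Cluster Update Settings API. As a sketch, using cluster.routing.allocation.enable (a real dynamic setting) to temporarily disable shard allocation:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
```

A transient value overrides a persistent one for the same setting, and both override anything supplied at node startup; transient settings are lost when the cluster fully restarts.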
It's easy to see types as being like tables in an SQL database, where the index is the SQL database itself. However, that is not a good way to approach types.
In fact, types are literally just a metadata field added to each document by Elasticsearch: _type. The examples above created two types: my_type and my_other_type. That means that each document associated with the types has an extra field automatically defined like "_type": "my_type"; this is indexed with the document, thus making it a searchable or filterable field, but it does not impact the raw document itself, so your application does not need to worry about it.
All types live in the same index, and therefore in the same collective shards of the index. Even at the disk level, they live in the same files. The only separation that creating a second type provides is a logical one. Every type, whether it's unique or not, needs to exist in the mappings and all of those mappings must exist in your cluster state. This eats up memory and, if each type is being updated dynamically, it eats up performance as the mappings change.
As such, it is a best practice to define only a single type unless you actually need other types. It is common to see scenarios where multiple types appear desirable. For example, imagine you had a car index. It may seem useful to break it down with multiple types, such as one per manufacturer (e.g., bmw).
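For instance, documents might be indexed into per-manufacturer types like this (the bmw type appears later in this section; the document fields are made up for illustration):

```
PUT /cars/bmw/1
{
  "model": "320i",
  "year": 2016
}
```
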
This way you can search for all cars or limit it by manufacturer on demand. The difference between those two searches is as simple as:
GET /cars/_search
and
GET /cars/bmw/_search
What is not obvious to new users of Elasticsearch is that the second form is a specialization of the first form. It literally gets rewritten to:
GET /cars/_search
{
"query": {
"bool": {
"filter": [
{
"term" : {
"_type": "bmw"
}
}
]
}
}
}
It simply filters out any document that was not indexed with a _type field whose value was bmw. Since every document is indexed with its type as the _type field, this serves as a pretty simple filter. If an actual search had been provided in either example, then the filter would have been added to the full search as appropriate.
As such, if the types are identical, it's much better to supply a single type (e.g., manufacturer in this example) and effectively ignore it. Then, within each document, explicitly supply a field called make, or whatever name you prefer, and manually filter on it whenever you want to limit to it. This will reduce the size of your mappings to 1/n, where n is the number of separate types. It does add another field to each document, at the benefit of an otherwise simplified mapping.
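Under that approach, limiting a search to one manufacturer becomes an ordinary term filter on the make field, exactly parallel to the _type rewrite shown earlier (the field name make follows the example above):

```
GET /cars/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "make": "bmw"
          }
        }
      ]
    }
  }
}
```
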
In Elasticsearch 1.x and 2.x, such a field should be defined as:
PUT /cars
{
  "mappings": {
    "manufacturer": { <1>
      "properties": {
        "make": { <2>
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
In Elasticsearch 5.x, the above will still work (though it is deprecated), but the better way is to use:
PUT /cars
{
  "mappings": {
    "manufacturer": { <1>
      "properties": {
        "make": { <2>
          "type": "keyword"
        }
      }
    }
  }
}
Types should be used sparingly within your indices because it bloats the index mappings, usually without much benefit. You must have at least one, but there is nothing that says you must have more than one.
At the index level, there is no difference between a single type with a few sparsely used fields and multiple types that share a bunch of non-sparse fields while each adding a few fields the other types never use.
Said differently: a sparsely used field is sparse across the index regardless of types. The sparsity does not benefit -- or really hurt -- the index just because it is defined in a separate type.
You should just combine these types and add a separate type field.
This is because each field is really only defined once at the Lucene level, regardless of how many types there are. The fact that types exist at all is a feature of Elasticsearch, and it is only a logical separation.
Nor can the same field be mapped differently in two types of the same index. If you manage to find a way to do so in ES 2.x or later, then you should open a bug report. As noted above, Lucene sees them all as a single field, so there is no way to make this work appropriately.
ES 1.x left this as an implicit requirement, which allowed users to create conditions where one shard's mappings in an index actually differed from another shard in the same index. This was effectively a race condition and it could lead to unexpected issues.
(In effect, shared fields collapse from n definitions down to 1.)

Analyzers take the text from a string field and generate tokens that will be used when querying.
An Analyzer operates in a sequence:

- CharFilters (zero or more)
- Tokenizer (exactly one)
- TokenFilters (zero or more)

The analyzer may be applied to mappings so that when fields are indexed, it is done on a per-token basis rather than on the string as a whole. When querying, the input string will also be run through the Analyzer. Therefore, if you normalize text in the Analyzer, it will always match even if the query contains a non-normalized string.
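The _analyze API shows this pipeline in action. For example, the built-in standard analyzer tokenizes its input and lowercases each token (the request body syntax here is the 5.x form; the sample text is made up):

```
GET /_analyze
{
  "analyzer": "standard",
  "text": "Quick Brown-Foxes"
}
```

This produces the tokens quick, brown, and foxes, which is why a query for "quick" run through the same analyzer matches a document containing "Quick".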