
# Hardware Recommendations


Our hardware approach is based on the [Particle](https://store.particle.io/#photon) platform, along with a custom PCB containing all the sensors, audio amplification, and digitization components. You can find more information on our hardware platform in [this section of the documentation](https://docs.opensourcebeehives.com/docs/alpha-sensor-kit). It has been thoroughly tested and will receive periodic firmware updates, so we recommend using it.

However, if you would like to use your own hardware, there are a few key features you should keep in mind:

## Connectivity

Wi-Fi or mobile data. If there is a strict bandwidth constraint, it is certainly possible to estimate the beehive state locally and limit the data output to state estimates and a reduced set of sensor readings, instead of streaming audio periodically. Either way, **some** data will need to be sent somewhere, so a minimum degree of connectivity is required.

## I/O Capabilities

The system should be able to receive external inputs, either analog or digital (natively or paired with an external ADC), at a high enough sampling rate. We recommend a minimum of 6 kHz for audio.

## Processing Power and Memory

The selected hardware needs to handle all the data readings, buffering, and the necessary connections. Additionally, if local processing is desired, it will need to run the feature extraction and classification algorithms. With the current state of these algorithms, this means computing a relatively low-order (8-10) filtering operation in 7 different frequency bands, in real time, at the desired sampling rate.

---

On the firmware side of things, you should take the following into account:

## Message Formatting

We will be using [InfluxDB](https://www.influxdata.com/) to handle the storage of data samples on the server side.
Since the standard InfluxDB message formatting takes up a lot of unnecessary bandwidth, we keep the payload as compact as possible. We recommend formatting the messages as plain HTTP POST requests:

**Audio**

```http
POST /write?db=AudioData HTTP/1.1
User-Agent: DEVICEID
Host: HOST:8086
Accept: */*
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 

0.123123123
-0.123123123
...
```

**Beehive States**

```http
POST /write?db=AudioData HTTP/1.1
User-Agent: DEVICEID
Host: HOST:8086
Accept: */*
Connection: keep-alive
Content-Type: text/plain
Content-Length: 

0 - 12 (beehive state index)
```

## Local Processing

Local processing algorithms are based on our [Theory behind Audio Analysis](doc:theory-behind-audio-analysis) and can be found in the following [repository](https://github.com/opensourcebeehives/MachineLearning-Local).
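The feature extraction stage boils down to the band filtering described under Processing Power and Memory: run the signal through several band-pass filters and accumulate per-band energy over an analysis window. The sketch below illustrates that idea with a standard biquad band-pass design; the `Biquad` and `bandEnergies` names, the coefficient formulas, and the band centre frequencies are our own assumptions for illustration, not the repository's actual API.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// One second-order (biquad) band-pass section with 0 dB peak gain at f0.
// Hypothetical sketch; the repository's actual filters may differ.
struct Biquad {
    double b0, b1, b2, a1, a2;
    double z1 = 0.0, z2 = 0.0;

    // f0: centre frequency (Hz), q: quality factor, fs: sampling rate (Hz).
    Biquad(double f0, double q, double fs) {
        double w0 = 2.0 * kPi * f0 / fs;
        double alpha = std::sin(w0) / (2.0 * q);
        double a0 = 1.0 + alpha;
        b0 = alpha / a0;
        b1 = 0.0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos(w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    // Direct-form II transposed, one sample at a time.
    double process(double x) {
        double y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// Mean squared filter output per band over one window: this is the kind of
// "energy vector" a classifier would consume.
std::vector<double> bandEnergies(const std::vector<double>& window,
                                 std::vector<Biquad>& bank) {
    std::vector<double> energy(bank.size(), 0.0);
    for (std::size_t b = 0; b < bank.size(); ++b) {
        for (double x : window) {
            double y = bank[b].process(x);
            energy[b] += y * y;
        }
        energy[b] /= static_cast<double>(window.size());
    }
    return energy;
}
```

Feeding a pure tone through such a bank concentrates the energy in the band whose centre frequency is closest to the tone, which is what makes the energy vector a usable feature for classification.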
A very general program for a local-processing solution in C++ would look something like this:

```cpp
#include <fstream>
#include <iostream>
#include <vector>

using namespace std;

// Filter, FeatureExtractor, Classifier, createFilters(), majorityVoting()
// and the states[] lookup table come from the MachineLearning-Local
// repository; samplingRate and windowLength are configuration values.

int main()
{
	vector<Filter> filters = createFilters();
	FeatureExtractor fex(filters, samplingRate, windowLength);

	// Create classifier for Active, Swarm, Pre-Swarm states
	Classifier c("f1,0.220705,1,4\nf6,0.028664,2,3\ns1\ns4\ns3\n");

	// Output: energy vector time series
	vector<vector<float>> energy;
	vector<int> DetectedStates;
	vector<float> energy_local;

	// Read data source
	ifstream data("../data_path");

	if (data.is_open())
	{
		// Input: x
		float x;
		while (data >> x)
		{
			// Update feature extractor
			fex.update(x);

			// If the feature extractor is ready, classify
			if (fex.isReady()) {
				energy_local = fex.getEnergy();
				DetectedStates.push_back(c.classify(energy_local));
				fex.clearEnergy();

				// After 5 classifications, perform majority voting and output the state
				if (DetectedStates.size() == 5) {
					cout << states[majorityVoting(DetectedStates)];
					cout << "\n";
					DetectedStates.clear();
				}
			}
		}
	}
	data.close();

	return 0;
}
```
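The majority-voting step smooths out single misclassifications by reporting only the most frequent state over the last few windows. A minimal sketch of such a helper (the `majorityVoting` body below is our own illustration; the repository's implementation may differ, e.g. in how it breaks ties):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Return the most frequent state index in the buffer of recent
// classifications. Ties resolve to the lowest state index, because
// std::map iterates keys in ascending order.
int majorityVoting(const std::vector<int>& detected) {
    std::map<int, int> counts;
    for (int s : detected) ++counts[s];
    int best = detected.front();
    int bestCount = 0;
    for (const auto& kv : counts) {
        if (kv.second > bestCount) {
            bestCount = kv.second;
            best = kv.first;
        }
    }
    return best;
}
```

With a buffer of 5 classifications, a single outlier window cannot flip the reported state, at the cost of a few windows of extra latency.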
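On constrained firmware without an HTTP client library, the Message Formatting requests shown earlier can simply be assembled as raw strings over a TCP socket. A hedged sketch of assembling the "Audio" request (the `buildAudioRequest` helper is our own illustration; `DEVICEID` and `HOST` are the same placeholders used in the examples):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Assemble the raw "Audio" POST request from the Message Formatting section.
// Audio samples are newline-separated in the body, and Content-Length is
// filled in from the actual payload size.
std::string buildAudioRequest(const std::string& deviceId,
                              const std::string& host,
                              const std::vector<float>& samples) {
    std::ostringstream body;
    for (float s : samples) body << s << "\n";
    std::string payload = body.str();

    std::ostringstream req;
    req << "POST /write?db=AudioData HTTP/1.1\r\n"
        << "User-Agent: " << deviceId << "\r\n"
        << "Host: " << host << ":8086\r\n"
        << "Accept: */*\r\n"
        << "Connection: keep-alive\r\n"
        << "Content-Type: application/x-www-form-urlencoded\r\n"
        << "Content-Length: " << payload.size() << "\r\n"
        << "\r\n"
        << payload;
    return req.str();
}
```

Computing `Content-Length` from the payload rather than hard-coding it keeps the request valid as the number of buffered samples per message changes.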