Anyone wanna help maintain this? #164
As another absentee kinda-maintainer (I've had a lot less time/ability to contribute since I stopped using the EPOC myself), I'd like to suggest @bschumacher, if he is interested; he's done a lot of good work on this project. I'll also tag @nh2 in case he's interested in maintaining the C version in addition to his Haskell version.
Thanks for pinging me! While I would like to help, I currently don't have enough free capacity to also maintain the C version. I will continue to follow this with interest, though.
I know it doesn't answer your question, but I am starting to think the future of […]. Although Emotiv has made strides in making the EPOC more sensitive, I still kind of […]. I did manage to port the 2.7 Python code to Python 3.3 and the latest release of gevent, although after porting to 3.3 I realized there was actually no advantage in the port. I plan to keep working on my CyKit version soon, and it does support connecting with OpenVibe. I'm not as fluent with C, so I probably couldn't be much help to you there, but I've got a […]. A little funny you messaged now, Kyle, as I just finished charging my headset yesterday.
I'll help on the pull requests and issues for the Python code.
@bschumacher, just added you to the org. From the looks of the issues and pull requests, the Python version is getting far more usage than the C version. Anyone know what kind of usage we're seeing of Python vs. C vs. whatever else?
I can't speak for Python vs. C, but what people seemed to appreciate about my Haskell version was that I provide ready-built static Linux and Windows executables for my tool that exports the raw data in various ways (as JSON, via TCP for OpenVibe, etc.). Maybe that's something Emokit could also do? In that case, the C version would be handy.
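For reference, the "export raw data as JSON / over TCP" idea can be sketched in a few lines of Python. This is only an illustration of the approach, not emokit's actual API: the sample dict shape and function names here are assumptions.

```python
import json
import socket

def sample_to_json(sample):
    """Serialize one EEG sample dict to a newline-delimited JSON record."""
    return json.dumps(sample, sort_keys=True) + "\n"

def stream_samples(samples, host="127.0.0.1", port=5555):
    """Send samples as newline-delimited JSON over TCP, so external
    consumers (an OpenVibe-style pipeline, a logger, etc.) can read them."""
    with socket.create_connection((host, port)) as conn:
        for sample in samples:
            conn.sendall(sample_to_json(sample).encode("utf-8"))

# Hypothetical record shape (field names are not emokit's real attributes):
# {"counter": 12, "F3": 4200, "F4": 4187}
```

A static binary of a small exporter like this (via PyInstaller or a C port) would cover the "ready-built executables" use case described above.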
Yup, that's totally something I'd meant to do in the first place. Would love to get binaries/packages together, also trying to do that on some other projects. Definitely worth filing more issues about. :)
I can help maintain this repo if you need any help. When I can, I try to answer questions and help with issues.
I want to! I'm not working with it yet, but as soon as I finish my master's degree I will be working on it. Does anyone know how to do post-analysis of the data with Affectiv/Expressiv/Cognitiv, or just with the raw EEG?
I don't want to besmirch the fine work everyone has done so far, because it's a really impressive hack and works surprisingly well given all the weirdnesses of the Emotiv architecture, the differences between models, etc. However, in testing this for my own use, I have begun to wonder if this code base would benefit from a major refactoring. Part of this may come from my own biases, because I am a cranky old man who came up as a procedural C programmer and I'm not always a fan of the "Pythonic" way of doing things. That said, there is some complexity intrinsic to this code base that could probably be taken out to make it easier to maintain.

For one example, I think you could get by without any of the gevent stuff at all, which could solve a lot of complications related to concurrency. There will still need to be some concurrency for Windows (if you can't use […]).

Similarly, I think the EmotivWriter class is probably unnecessary unless someone really intends to expand the writing-output options significantly (which I assume was the original intent). Right now, it's basically a wrapper for the built-in csv class. And I'm not sure if anyone actually uses the EmotivReader class in practice (maybe primarily useful for debugging?), but if not, that could go too.

Anyway, for my own private use, I'm currently working on a super-stripped-down version that simply logs all (fully-processed) values to a CSV file (and hopefully draws a display like […]). If folks are interested, I can post it and see if anyone is interested in helping flesh it out or test it on their own systems. Trying to get a basic version done today, but we'll see if that actually happens...
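The "drop EmotivWriter, just use the stdlib csv module" point can be sketched roughly like this. The sample dict keys and the `log_samples` name are hypothetical stand-ins for whatever the device read loop actually yields, not emokit's real interface:

```python
import csv

# The EPOC's 14 EEG channel names
SENSORS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
           "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

def log_samples(samples, out_file):
    """Write fully-processed sample dicts straight to CSV -- no gevent,
    no writer class, just the built-in csv module."""
    writer = csv.writer(out_file)
    writer.writerow(["counter"] + SENSORS)  # header row
    for sample in samples:
        writer.writerow([sample["counter"]] +
                        [sample.get(s, "") for s in SENSORS])

# Usage (with a real device you'd replace the list with a read loop):
# with open("session.csv", "w", newline="") as f:
#     log_samples(device_samples(), f)
```

That is essentially the whole of the stripped-down logger described above, minus the device I/O and the display.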
I used Emokit for the last year and really liked it. The changes @bschumacher made in an OO direction make it easier to start with Emokit. As I am doing my master's thesis at the moment using the EPOC, I'd like to help improve Emokit and answer questions.
@mattrjohnson As mostly an end-user of this repo for getting data from an EPOC into a data processing scheme, what would help me the most would be really super simple tools, like command line programs, that wrap setup, connection, QC checking, and data logging (preferably with marker signals) into something like:
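An interface along those lines might look like the following argparse skeleton. The tool name `epoc-record` and all of its flags are purely hypothetical, sketched to illustrate the wrap-setup/QC/logging idea, not an existing emokit command:

```python
import argparse

def build_parser():
    """Skeleton for a hypothetical one-shot 'record a session' command."""
    p = argparse.ArgumentParser(
        prog="epoc-record",
        description="Connect to an EPOC, check contact quality, log to CSV.")
    p.add_argument("--output", default="session.csv",
                   help="CSV file to write samples to")
    p.add_argument("--duration", type=int, default=60,
                   help="seconds to record")
    p.add_argument("--marker-key", default="0",
                   help="keyboard key logged as an event marker")
    return p

# e.g.:  epoc-record --output run1.csv --duration 120
```

The point is just that an end user never touches Python objects: one command, a few flags, a CSV at the end.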
So I am very interested in seeing your super-simple logger as something that sort of fits into such a framework. But I am not enough of a developer to implement stuff like this myself so far. That said, such tools are really more of a subproject; the project as it currently stands has a lot of extras that are good for ongoing development. But I would definitely like to see what you describe as a subproject!
@mdtdev You can check out our progress so far here: https://bitbucket.org/MattTheGr8/emofox

Right now, it appears to work fine with our EPOC on both Mac & Windows, which isn't TOO surprising given that most of the core code was lifted straight from emokit and was just rearranged a bit to simplify it. Currently it works like this: you run the single Python script (emofox.py), it asks you for a CSV filename for logging data, and then it pops up a window similar to render.py from emokit that displays the incoming data live (including quality values). It can also monitor and log a keyboard keypress (currently set to the '0' key), which our group uses for basic tagging of the beginning and end of events. Hitting the Q or Esc key quits the script and closes the log file.

It's not SUPER-flexible right now, but it could be made more flexible with a medium-sized amount of work. If you know some Python, you could make some adaptations pretty easily (e.g. if you'd rather not show a visual graph, you could just comment out that stuff and replace it with some print statements). But things like that are not currently set up as explicit options on the user side. (There also aren't a lot of options for data logging -- it basically always logs, and logs everything it can. It would also be nice to add an option for logging keypresses for more than a single key, or things like mouse clicks or whatnot... but none of that is there yet.)

There also might be one or two bugs to track down, the main one being that occasionally it seems to drop a sample (or possibly get an extra sample) for reasons that aren't entirely clear to me. It might be happening on a lower level (e.g. the system just gives the script an incomplete or badly-formatted packet occasionally). I haven't delved too deeply into it because it happens pretty rarely (once every few minutes, if at all) and doesn't matter for what we're doing, but I don't think it will be TOO hard to figure out when we get the time to look at it.
The main benefit is that it is WAY faster than emokit, particularly when it comes to the graphical display (which for us was unusably slow on Windows in emokit). The code I wrote for that isn't particularly flexible, but it seems to work fine on all the computers I tested it on. Basically I pull some tricks to make it as fast as I reasonably can, the main one being that it renders the display in little bits and pieces between samples and then only draws the final display every 17 samples or so. Thus the graphing is not particularly smooth, but it seems to keep up with the pace of data acquisition just fine (i.e. we don't get behind on collecting data samples from the EPOC and the signal correctly renders in real time), even on the relatively slow Windows laptop we're using for data collection.

Anyway, feel free to check it out and let me know (either here or on Bitbucket) how it works for you, if anyone would like to contribute or request features, etc. I don't have a ton of spare time to work on tweaks that aren't needed for our work anyway, but I'll try to contribute to the community when I can...
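The "draw in pieces, flip every ~17 samples" trick described above boils down to batching the expensive display update. Here is a sketch of just that control flow, with the drawing specifics abstracted away (the function names are hypothetical, not emofox's actual code):

```python
FLUSH_EVERY = 17  # at the EPOC's ~128 Hz, roughly 7-8 screen updates/sec

def process_stream(samples, draw_partial, flip):
    """Do cheap incremental drawing for every sample, but only push the
    finished frame to the screen every FLUSH_EVERY samples."""
    for i, sample in enumerate(samples, start=1):
        draw_partial(sample)       # cheap: blit one new slice of the trace
        if i % FLUSH_EVERY == 0:
            flip()                 # expensive: update the whole display
```

Because the per-sample work stays small, the read loop never falls behind the headset, and the display only *looks* choppy rather than actually lagging the data.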
I'm interested in helping out. I have access to an EPOC, which I'll be working with from a Linux machine. I've gotten some preliminary test data off of the device, so my next steps will likely be around validating results and improving some of the tooling.
Hi @Morgan243, I'm having a really bad time trying to get raw EEG data from an EPOC device. At this very moment, I can read the data but it doesn't look like reliable EEG data; it's more like random numbers, and even the battery level oscillates randomly. I really don't know what the problem could be. This is the output I get from hid_info.py:
Currently this version of emokit does not fully support the Epoc […]. I have posted a list of all 6 keys on another issue page. If the first number is counting up, then you'll know it's being decrypted […]. I'm currently working on putting out a new revision that will be a huge […].
Hi @warrenarea, I'm a bit lost here. |
To see the decrypted data, you'll have to print out the "data" variable. After the `data = cipher.decrypt(...)` line, build up an `apacket = ""` string and `print apacket`. If it is deciphering correctly, the first number will count up from 0 to 255. After that, every other number that is around 127 I'd ignore for now... but the […]. If you get to that point, I'd recommend displaying it on a graph.
I have finally solved it. For some reason, this Emotiv works with the key of model 2 (#264). Thanks a lot @warrenarea.
Ah yes, model 2: that would be for the regular Epoc model. This means that your device has been put in Epoc mode, which can be set from […]. The Epoc, as you well might know, has the ability to change its model-mode, […].

To change the mode, simply plug your device into a USB jack on your computer. Remember, to change the mode, the Epoc must be connected to USB and turned ON. I do have code in the Issues that allows you to change the modes from emokit.
Just to keep everyone updated... you may not have noticed much project activity lately. However, I just want to let you all know I'm still working daily on a new script for this. I've gotten past the major hurdles, and the coding should all be 'relatively' downhill from […]. I think once you see what I have done with this, it will inspire a new wave of projects. Plus, I've made some pretty neat changes to the original emotiv code that should speed […]. I'm still going to be using Python 2.7, I couldn't justify switching to Py3... also, it will be limited […]. So stay tuned... if all goes smoothly, it shouldn't be too long before I put up a new release.
https://discordapp.com/invite/gTYNWc7 You can either download the Discord chat messenger, […]. Basically we have the input and the output... and just need to sort out how […].

*** I have pinned a few of these files on the chat. ***

I also compiled a list of floating point numbers that correspond to each decimal value (i.e. 4200). The floating point numbers actually repeat and are not unique at all... I think if someone with […]. Also, it seems like if you multiply or add some of them together, they match up, but with a slight […]. Would appreciate it if you could take a look...
So. Uh. Hi. Absentee repo maintainer here.
Just noticed there's tons of issues and pull requests, and I have no idea where to even start on cleaning this up. I haven't touched emokit in a long time.
Anyone interested in helping out on cleaning up issues and bringing in pull requests? I don't want to just start blindly doing so. :)