The Bionic Sound Project

this girl’s journey to sound

Heart-Stopping Moments Friday, April 3, 2009

Had my appointment with Catherine today.  And we came up with a plan of action.

  1. Borrow the battery from them, to see if it reduces the itching/redness, and if it’s my batteries that are causing the problem.
  2. If that doesn’t work, try out a body processor.

So that’s where we started.  We went into the CI booth, plugged me into the computer, with the intention of changing my MAPs because I cannot handle the intensity of the old MAPs after a year of inactivity.

So here I am, hooked up to the computer, watching the screen, and the first thing I see is red over the internal part.  Everything was recognized and green, except for the internal implant.  Checked it again, made sure everything was connected properly.  No luck.

It was time to call Advanced Bionics for troubleshooting. Catherine found the “dummy” internal part and tried it with their processor. Everything worked.

I held my breath.  This was not looking good.

We took my processor and attached it to the dummy. It worked.

I started to cry.

Crying because my internal implant wasn’t working. Crying at the thought of having to go through a third surgery in less than three years. Crying because maybe that’s why my CI wasn’t working back in January.

Catherine picked it up to take it off, and then the computer recognized the internal part!  It was a loose wire.

Talk about a heart-stopping moment.  A huge sigh of relief.

So now I’m sent home with MAPs that are more than 100 points below where I was, and with my IDR reduced to 60 from 75. One has Fidelity 120, and the other doesn’t. I’m going to try both, as Mandy said in her last notes that I may do better without Fidelity 120.

Thirty minutes into wearing the CI, my ear was red and itching, and Catherine confirmed it; there was a red spot where the CI sits. So now it’s time to find a solution.

It’s amazing how quickly the brain adapts to using a CI after not wearing it for so long. I’m missing it… and feel like I’m re-experiencing activation day all over again, in the sense that I’m discovering sounds that my hearing aid definitely did not pick up on.

 

The Three-Month Checkup Friday, December 1, 2006

I had my three-month checkup today with Dr. M and saw Megan for a mapping session. It was at 1, but I thought it was at 11 and got there early. (It was 1 pm NY time, so I get credit for that!) Dr. M was nice enough to see me anyway, and had me come back at 1 to see Megan.

Prognosis is that I’m doing great.

Megan changed the M and T levels, and gave me a program that will work in noisy situations. So now I have one for normal life, one for noise, and one for the iPod.

I did show up with the kumquats, and she laughed. ^.^ Told ya I would bring you some, and I’m one who sticks to my word!

Got 42% on my listening test for speech in noise, which was better than what I got with the hearing aids, but could be better. (I’m a perfectionist). I had a hard time with the beep test, because I thought I was hearing a kind of prolonged high-frequency noise, so I wasn’t sure if I was hearing it or not.

The most interesting part of the visit was when I was talking to Dr. M about the vertigo episodes from October, where he said it could be a migraine. Not the typical migraine that people think of, as there are many types of migraines in different parts of the body, including the abdomen. So it could be a type of cranial migraine, I think that’s what he said.

Anyway, avoiding that involves watching my caffeine, chocolate, and yellow cheese (!!!) intake. Managing stress is another key factor. The cheese part has to do with the dye in it; all three come from a bean, and they haven’t figured out why it causes that. Caffeine isn’t an issue, as I’m not a coffee drinker and rarely consume soda. I do like chocolate, and cheese to a degree (rather picky though).

Heading back to school tomorrow for three weeks. I’m not ready for another quarter!

I hope the next quarter will rock academically, because I did extremely well with the Cochlear Implant for my first quarter having it. I made the Dean’s List for the first time in what feels like forever and got a 3.58 for the quarter. It was amazing, especially getting the A in the class I was struggling with in terms of access services. We’ll see what happens winter quarter, as I’m choosing to fly solo…

 

Photos From The Film Shoot on September 15, 2006 Friday, September 22, 2006

Mandy’s been bugging me about posting the pictures from last week online, so here they are!

On September 11th, I was asked to be a part of a project that the school is working on. It wasn’t till September 14th that I found out exactly what the film is about. It’s a recruitment video about the school itself that is going to be sent out to 2,500 people. One portion of the video discusses services for deaf students, and the part that we participated in covers the services they provide for Cochlear Implant recipients.

 

Mandy and me, before shooting starts


 

“It’s A Sound Good Thing” Thursday, September 14, 2006

I’ve been listening to music with the CI at home for the last few hours, as opposed to being hooked up to the iPod. It’s so hard to describe music with the new map, but basically my mind is just blown.

At 9:07 pm, I heard my first actual “s” in a song. I heard “Sugar baby” in Crazy Town’s “Butterfly”. I know that I can expect the “s” in my mind, but actually hearing it in a song (especially one that is considered nu-metal/rap-rock) is amazing!

I’ve written about listening to music since the day I was activated. With each mapping session, sounds evolve and change, and I just get more amazed with each nuance and discovery that I make with the CI. It’s very interesting for me to analyze what I’m hearing and what I’m missing.

Right now, it sounds so much clearer (especially with the HA on at the same time), and I am getting so much more information in the CI ear that I can’t hear with the HA.

The reason I can tell the difference is because I actually can hear or “feel” the stimulation of the sound, whereas the same feeling/sound does not translate to the HA ear.

That’s one way I have learned to compare whether sounds are the same, based on the “feel” in my ear (if I can’t understand it). For a person who may not understand or have an appreciation for sound, this may be difficult to grasp. You can differentiate between sounds by their own distinct “feel”, and I am not sure how much of a role residual hearing plays in that.

This whole experience with music is kind of disturbing to me, because I consider myself a music fanatic. Right now my perception is being rocked to its very core, and it will continue to be as I progress with the CI.

I finally heard some more about the 120-channel processor, and its official name is the Harmony. The difference between the Harmony and the Auria, other than the programming strategies, is that the Harmony will have a built-in T-Mic and be a “power miser” to deal with the battery drain issues.

I excitedly anticipate seeing what it looks like. I cannot wait to stick it onto my head and have even finer control over music and the sound spectrum!

As for listening to the iPod with the HA, I’m about ready to break the door that covers the electrodes off the HA. It is so difficult to plug in the audio boot because it requires a fine amount of dexterity and lots of patience. There’s this whole complicated process to putting it on, which makes me prefer the audio boot of my old HA, which I could just snap on quickly. It’s frustrating, because the industrial design side of me wants to take my training and put it to use redesigning it! The design flaw makes it neither functional nor practical, and it’s an annoyance when I’m moving about and it doesn’t stay put.

On Tuesday, Apple announced several new iPods, one of which has a capacity of 80 GB! My wish has been answered! There’s something out there that will fit my entire music collection, which currently tops out at 42 GB. Yippee!

 

“You turned it up like WOAH!”

“You turned it up like WOAH!” – Mandy

I had my first mapping at school today. This morning, Mandy and I were joined by Catherine and Don (the other audiologists I have worked with), who wanted to sit in on the mapping session. The majority of the CI students here have Cochlear, and AB makes up less than a quarter of the population (unconfirmed for 2006–2007), so we had to spend a bit of time getting re-familiarized with the software.

The best part of the mapping session today…my brain is definitely ready to utilize the CI!

Mandy, Catherine, and Don were deciding how best to program me, based on the reports I’ve been making over the last three weeks and Megan’s reports/programs. The other thing they were curious about is why I had the lower frequencies turned up high, but not the upper frequencies. Speech clarity could be an issue because the upper frequencies were missing or not as strong as the lower frequencies, so they wanted to see what would happen with my brain if we adjusted it.

So Mandy did speech-burst testing, which is the same as the “beep test” that Megan used to do, except it fired multiple electrodes at once. It was at this juncture that we realized that what I was hearing was soft to moderately soft. This explains why speech had started to sound more distorted over the last week, resulting in frustration for me.

I didn’t have to adjust the lower frequencies as much, but I really adjusted the upper frequencies, and I am pleased to report that speech is starting to sound AMAZING with the CI, in the few short hours since I was programmed. It is also starting to balance out with the hearing aid now, which is a great relief, because I was worried that I was going to have to go without the HA so that I would listen with the CI instead.

Don and Catherine both told me that one other person has mentioned the radio playing in their head. Apparently CIs used to have RF interference in the past, but it shouldn’t be happening today with the newer models. Feeling like I’m hearing in my non-implanted ear when I don’t wear my HA has also happened to a few other people. So that answers some burning questions I was very curious about! I was warned that these new programs may drain the battery even faster, so I need to continue with the battery log.

I hung out in the common area while they had their department meeting, because I had speech therapy 2 hours later, and I tried to do homework but ended up filling out paperwork instead. After their meeting, Mandy chatted with me while she ate her lunch. It was fun getting to talk about non-audiology related stuff. She’s so cool.

My exciting news from Monday… I was asked to be in a film that the school is making, part of it has to do with Cochlear Implants, and the shoot is tomorrow. Mandy just told me exactly what it’s going to be used for, and it isn’t what we originally thought it was. Eeek. *nervous* We’re both talking about how we have to look extra-pretty tomorrow, because we are GIRLS who like to look good!

 

Listening Ability? How Does One Learn To Listen? Friday, August 25, 2006

It’s been barely three weeks, and I’m already worried about the CI and my listening ability.

I wonder if I’m doing the right things to maximize my potential. Am I listening to the right stuff? Am I doing the right kinds of things to try and maximize my speech perception? All these types of questions and thoughts have been swirling around in my head.

I’ve been so used to doing therapy, therapy, therapy, and getting feedback on what I’m doing, that right now I feel like what I do on my own isn’t helping. Everything I do has a visual component to it. It’s difficult to watch TV or read along with books, because I fall back on my “hearing aid” training and use my vision more than using my brain to listen and understand what is being said. At the same time, I’m not getting the reinforcement of “yes, what I heard or thought I heard is indeed correct.”

On Monday, Susan said that I need to write for myself and not for others (where have I heard that before?). She wants me to write a daily log of my adventures in sound, and what I’m hearing, so that I can look back in 6 months and go “wow, that was a really rough time, but look where I am now and at what I’m hearing! YAY ME!”

I do well with words in a list format, but have trouble with sentences. Mom did word lists with me after I saw Megan earlier this week, and she started a new category of vegetables with me. However, I got it the hard way: instead of “mushroom”, “lettuce”, and “tomato”, I was getting “portabella mushroom”, “bibb lettuce”, and “roma tomato”. That’s pretty much standard for our house, as we get different kinds of specific veggies for my guinea pig. However, I did get “jalapeno” right on the first try! She’s also been reading my favorite childhood book, “Cars, Trucks, And Things That Go”, to me for listening practice. I love that book so much.

Today, I saw Megan for #6, and we tinkered around with the speech program some more. I have trouble with “C” and “M”, and with hearing the first part of a word. I also told her about my concerns with listening. I know I don’t have patience (especially for somebody my age, as I was reminded by my dad on activation day!) and want more! She brought out the other computer that had the Sound and Beyond program made by Cochlear Americas. I got to play with it for a while, and it was fantastic. I loved how if you get a word wrong, it repeats the correct word and the wrong word, so you can compare them.

This kind of program is right up my alley because it has a similar concept to the Touch&Tell that I had as a kid. What can I say, I love hands-on learning! It is awfully expensive at 290 dollars, but it might be an investment well worth making if it will help me, and I did enjoy using it… I could have played with it all day if I was allowed to.

I got 76% on the words when we played with the computer; she said I was doing pretty well for just under 3 weeks. The other cool thing this program does is play music and then let you pick which instrument produced that melody. I was able to get the piano and the xylophone right. But when it came to the violin, ughhhhhhhhh, it sounded horrible! And I used to play the violin! The piano sounded much better (after 10+ years of playing, I should have an ear for it).

Electrode #13 doesn’t have that special sound for me anymore. It’s so weird, because it sounded nothing like it did the last time. Megan did the beep test again today, and Electrode #6 sounded exactly like my mom’s old car alarm (Park Avenue) when it goes off. Now I have a way to describe what it sounds like to those who can’t hear what’s in my head!

T-Mic Hook: P1+2 – speech, P3 – 70/30 mix for DC.
DC Hook: P1+2 – iPod only, P3 – 70 iPod/30 environment.

I also have a battery log that Megan created so I can find out if I have a bad battery, a bad “charging slot”, or if it’s just the program that is draining my battery fast on the CI. I do have powerful programs on my CI, which draw a lot of power from the battery. My 18-hour battery is only lasting 12 hours, and I was totally unprepared for that the other day when the CI battery died on me.

 

The Two-Week Mark: Itchiness and Mapping Session #5 Monday, August 21, 2006

Dr. M says that the itchiness is a result of the humidity/heat that we’ve been having here the last week. Of course, having something new and foreign next to your skin doesn’t help. I took off the interchangeable accent colors for a few days, and it seems to help with the itchiness. The redness comes from me scratching my pale skin, and I’ve been trying very hard to avoid doing that.

I’m loving my CI more and more each day, as it starts to sound more realistic. I’m still missing details, and I can’t wait till I start to hear sounds in their “wholeness” instead of in their current state, which is hard to describe.

I’m starting to be able to hear/understand what the captioning doesn’t cover, like on TV commercials, where they say “Monday at 8 pm” or have a graphic with the words, but don’t caption what is being spoken. The other discovery is the huge lag between what is being spoken and what I can read on the TV.

Last Friday, I went out to the Yard House with my mom and her coworkers to celebrate Brian’s graduation from the MBA program. We sat outside by the entrance, and I could still hear the individual voices at our table with the CI, as compared to hearing one big glob of chaotic noise with the left ear.

It’s hard to believe that it’s been exactly 14 days since I was activated. Today, I had my 5th mapping session with Megan, and instead of doing all the beep testing, we focused on fine-tuning the implant. I wanted to get clearer speech sounds, so we spent today’s session working on that. I had to sit and listen to her read from a list of words, with her face covered by a black screen so I couldn’t lipread her, and I didn’t have a piece of paper to read from.

We started with the animals, and I did pretty well with that (except for “tiger”; I was only getting the “ger”). Then we did fruit, and I got almost all of them right, except for mango and blueberry, which stumped me for a while because it sounded like “rooberry” (that should have been a clue right there!). My problem with a lot of the words (such as peach, cherry, blueberry, mango, tiger, cat, shee/p/t) is that they don’t sound right, but I can hear enough of the word to tell what it is.

During the mapping session, I could hear Kim out in the hall, her chair moving around, her talking on the phone, her going through papers at her desk, and I was amazed, because each time I go there, I’m hearing more and more of the little sounds that make up the real world. Eventually we had to close the door, because it was too distracting for me to figure out speech sounds while filtering out the real world. That’s going to be a big challenge for me. The other thing that I learned is that people are lazy with their speech! “Button” is a perfect example of that. There are words that I know are supposed to sound a certain way, but when I actually hear them, they don’t, because people leave sounds out! It’s like the dialects of different areas and ways of speech!

Music is starting to sound much more real to me (closer to pre-implant). There are several songs that just don’t sound right, and others that sound like they did before, if not a tiny bit better. I’m hearing more of the vocals in songs as opposed to the melody, which is cool. I’m excitedly anticipating what the 120-channel processor is going to sound like, if I’m getting these results with the Auria.

Today’s random link
Dangerous Decibels: How Loud Is Too Loud?