KnowBrainer Speech Recognition
Topic Title: Correcting misrecognized words and phrases in context: how much context?
Topic Summary: How many words constitute "in context"?
Created On: 03/29/2009 01:09 PM
 03/29/2009 01:09 PM
Alan Cantor
Top-Tier Member

Posts: 4280
Joined: 12/08/2007

Last week, I demonstrated the art of fine-tuning NaturallySpeaking to a group of 40 experienced DNS users. I created a new user, did the usual tweaks, and proceeded to dictate. Accuracy was outstanding. I wanted to show how to correct misrecognized words in context.

But there was a problem. DNS refused to misrecognize anything.

So I dictated a long blast, maybe 60 words without pausing, and finally, DNS made one small mistake somewhere in the middle of the utterance. I issued the "Correct That" command. The Spell dialog appeared, with nine variations of the 60-word utterance, all of which were too long to fit in the dialog. This was an unusual situation for me to be in; under normal conditions, I dictate a phrase or sentence at a time.

If correcting in context is recommended, how much context does DNS actually need to improve the language model? My guess is that there is no benefit to correcting in the context of very long utterances. If someone dictates entire sentences or paragraphs without pausing, it is not even practical to correct entire utterances. It takes too long.

Is it OK to select a few words before and after a misrecognized word? And if so, how many words? Is one word in each direction enough? Three words? Do punctuation marks count as words? How about the normal formatting commands that are interspersed with dictated text such as "cap" and "no-space?" Is there a rationale for the number of words based on how the act of correcting misrecognitions modifies the language model?

 03/29/2009 01:36 PM
R. Wilke
Top-Tier Member

Posts: 7809
Joined: 03/04/2007

Alan,

the n-grams underlying the HMM (Hidden Markov Model) based models of DNS have by now grown into quadgrams, meaning that for each word, up to four words before and after it are considered as context (in theory, of course). So, as long as you don't dictate phrases or whole sentences of at least nine words, the quadgrams wouldn't even be fully triggered, and even then only for the word right in the middle of the utterance.

Therefore, I would say that it really shouldn't matter how long the utterance is: the longer, the better, in theory.
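Just to picture that idea (purely a toy sketch in Python, nothing from Nuance's code), the "up to four words on each side of a target word" notion looks roughly like this:

# Toy sketch only, not Dragon's implementation: collect up to four words of
# context on each side of a target word, clamped at the utterance boundaries.

def context_window(words, target_index, span=4):
    """Return (left_context, target_word, right_context) for one word."""
    left = words[max(0, target_index - span):target_index]
    right = words[target_index + 1:target_index + 1 + span]
    return left, words[target_index], right

utterance = "the quick brown fox jumps over the lazy dog".split()
for i in range(len(utterance)):
    left, target, right = context_window(utterance, i)
    print(" ".join(left), "[" + target + "]", " ".join(right))

In a nine-word utterance, only the middle word gets the full four words on both sides; every other word gets whatever is available before the utterance boundary cuts it off.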

But I'm really wondering how in the world you managed to enter an utterance of 60 words (was it 60?) in one piece, because normally DNS will interrupt you when it thinks it's time to do so.

Rüdiger

 



-------------------------



No need to buy if all you want to do is try ...

DragonCapture KB Download (Latest)
DragonCapture Homepage

 03/29/2009 02:53 PM
Alan Cantor
Top-Tier Member

Posts: 4280
Joined: 12/08/2007

Quote:
But I'm really wondering how in the world you managed to enter an utterance of 60 words (was it 60?) in one piece, because normally DNS will interrupt you when it thinks it's time to do so.

Hi Rüdiger,

I do not usually dictate that many words as a single utterance, unless I am reading something. But if I know what I want to say, I manage to say a lot before I need to take a breath.

I am not sure what you mean by DNS interrupting. After dictating a long utterance of, say, 60 words, I pause, and the text appears on the screen one or two seconds later. (I often dictate with the "Results Box" and "Dragon Bar" hidden; maybe that makes a difference?)

 03/29/2009 02:58 PM
David.P
Top-Tier Member

Posts: 638
Joined: 10/05/2006

Usually, for correction, you should select the entire utterance, which you do simply by placing the cursor inside (or beside) the wrongly recognized word. For long utterances, I usually either resize the Spell That correction box to fit the entire length of the utterance, or I use single-word correction.

Since NaturallySpeaking's language model takes the corrected word's context into account no matter whether you select the surrounding n-grams or simply select the wrongly recognized word by itself, it really doesn't matter what selection you make for correction (as long as you don't *manually* select a number of words, since chances are that these words will span two utterances, in which case NaturallySpeaking won't learn anything from your correction).
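To make the utterance-spanning point a bit more concrete, here is a toy illustration in Python. The character offsets and the check are hypothetical bookkeeping I made up for the example; this is not Dragon's internals:

# Hypothetical bookkeeping, for illustration only: each recognized utterance
# covers a character range in the document. A manual selection that crosses
# an utterance boundary cannot be matched to one recognition result, which is
# roughly why neither playback nor learning happens in that case.

utterances = [
    {"text": "The quick brown fox",     "start": 0,  "end": 19},
    {"text": "jumps over the lazy dog", "start": 20, "end": 43},
]

def selection_within_one_utterance(sel_start, sel_end, utterances):
    """True if the selection lies entirely inside a single utterance."""
    return any(u["start"] <= sel_start and sel_end <= u["end"] for u in utterances)

print(selection_within_one_utterance(4, 15, utterances))   # True: inside the first utterance
print(selection_within_one_utterance(16, 25, utterances))  # False: spans the boundary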

David.P

-------------------------

Sennheiser MKH Mic
Visual & Acoustic Feedback + Automatic Mic Control

 03/29/2009 04:54 PM
R. Wilke
Top-Tier Member

Posts: 7809
Joined: 03/04/2007

Quote:
I am not sure what you mean by DNS interrupting. After dictating a long utterance of, say, 60 words, I pause, and the text appears on the screen one or two seconds later.

Alan,

I'm really at a loss following this. Dictating 60 words in one utterance isn't something I could usually do, if at all. I might be able to do it if I were reading canned text, provided I had the breath for it, which I doubt. But with the freeform dictation I do more than 99.9% of the time, I couldn't do it.

However, even with my style of dictation, which varies from single words or two- or three-word phrases up to whole sentences at best, I usually find that DNS takes its pauses following its own scheme. It doesn't always pause where I tend to pause or slow down, but may spit out its text even while I'm still dictating the rest of the phrase or sentence. Of course, I can only observe this when dictating continuously, because otherwise any break in dictation immediately results in text being displayed on the screen.

By going to the recognition history, I can see the portions of text exactly as they were displayed on the screen, separated one by one. Does this mean that in your case, with up to 60 words being processed at a time, there are 60 words on one line in your recognition history?

Rüdiger

 



-------------------------



No need to buy if all you want to do is try ...

DragonCapture KB Download (Latest)
DragonCapture Homepage

 03/29/2009 05:26 PM
Alan Cantor
Top-Tier Member

Posts: 4280
Joined: 12/08/2007

Quote:
By going to the recognition history, I can see the portions of text exactly as they were displayed on the screen, separated one by one. Does this mean that in your case, with up to 60 words being processed at a time, there are 60 words on one line in your recognition history?

 

 

Yes. Here is an example of a 43-word utterance.

 03/29/2009 05:49 PM
R. Wilke
Top-Tier Member

Posts: 7809
Joined: 03/04/2007

Alan,

that's amazing! What's your setting of the "Speed vs. Accuracy" slider?

Rüdiger

 



-------------------------



No need to buy if all you want to do is try ...

DragonCapture KB Download (Latest)
DragonCapture Homepage

 03/29/2009 06:15 PM
Chucker
Top-Tier Member

Posts: 14123
Joined: 10/10/2006

Rüdiger,

I can duplicate Alan's results very easily even though I generally don't dictate in phrases longer than nine or 10 words anyway.  I usually confine my utterances to full sentences. What Alan does, if he does it well, is an art and a skill not possessed by most users.  However, I would make two observations here.

1.  In my case, the speed vs. accuracy slider is at 100%.  Yet, I get the same results if I dictate 40 to 60 words without pausing, or rather without DNS detecting a pause.  DNS is extremely accurate under this condition (i.e., virtually 100% most of the time).  That is how DNS is designed to work.

2.  Also, in my case, such utterances are displayed within half a second or less with the speed vs. accuracy slider set to 100%.

It's relatively easy to do this, if you can't think of anything to say spontaneously containing that many words, by simply reading several paragraphs from a magazine or newspaper article without pausing.  When I demonstrate DNS, I always ask someone in the audience to pick out an article from a magazine at random and I dictate it from beginning to end with as few pauses as possible (i.e., only to take a breath).  DNS gets it 100% accurate almost all the time.

In addition, as David P. points out, and just to augment what he's saying, the bigram, trigram, and quadgram HMM models don't care how long an utterance is.  These models are applied across every word in an utterance according to the specific model.  Each word in an utterance becomes a target word, and the right and left context is analyzed accordingly.  Obviously, no more than four words on either side of the target word can be analyzed for context at any given point in time, but the end result is the combination of all the context analysis, no matter how long the utterance is.  I think everyone is thinking in terms of these HMM models being applied to a specific selection of words, after which the model is no longer applied.  However, context analysis is conducted word by word through the entire utterance, with each word being selected as a target word and the appropriate HMM model applied.  So, correcting a 63-word utterance would be handled just as well as correcting a three-word utterance.  The only difference is that the 63-word utterance is more likely to be 100% accurate because of the way the Language Model is applied, as well as the fact that DNS is designed to recognize longer utterances more efficiently and effectively, producing greater accuracy.  Dictating in short, choppy phrases tends to produce less accurate results.
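If it helps to see the "every word becomes a target word" point spelled out, here is a toy sketch in Python. The probabilities are invented and the scoring is deliberately simplistic, so take it as an illustration of the idea only, not as Nuance's model:

# Toy sketch, not Nuance's code: the language-model score of an utterance is
# the sum of per-word scores, each computed from a bounded window of
# neighbouring words, so the length of the utterance itself doesn't matter.
import math

TOY_BIGRAMS = {  # invented conditional probabilities P(word | previous word)
    ("all", "too"): 0.4, ("too", "much"): 0.5,
    ("much", "fatty"): 0.2, ("fatty", "foods"): 0.6,
}
UNSEEN = 0.01  # fallback probability for word pairs not in the toy table

def word_score(prev_word, word):
    return math.log(TOY_BIGRAMS.get((prev_word, word), UNSEEN))

def utterance_score(words):
    # The same per-word computation is repeated for every target word,
    # whether the utterance has 3 words or 63.
    return sum(word_score(words[i - 1] if i else None, w)
               for i, w in enumerate(words))

print(utterance_score("all too much fatty foods".split()))
print(utterance_score("all to a much fatty foods".split()))  # scores lower

The per-word computation is identical whether the utterance has three words or sixty-three; only the number of times it is repeated changes.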

One additional point.  If your Acoustic Model is good, you can get virtually 100% accuracy dictating one word at a time, just as we used to do in discrete speech recognition.  However, if the Acoustic Model is not good (accurate), you will get many errors dictating one word at a time discretely.  This is one way to test just how well your Acoustic Model is performing.

Chuck Runquist
Owner, GEMCCON - The Choice of Intelligence
Speech Recognition Consulting and Training

I know that you believe you understand what you think I said, but, I am not sure you realize that what you heard is not what I meant.



-------------------------

VoiceComputer: the only global speech interface.

The views, thoughts and opinions expressed in this post are my own and do not reflect those of VoiceTeach LLC.

Chuck Runquist
VoiceComputer technical support

 03/29/2009 06:36 PM
R. Wilke
Top-Tier Member

Posts: 7809
Joined: 03/04/2007

Quote:
If your Acoustic Model is good, you can get virtually 100% accuracy dictating one word at a time, just as we used to do in discrete speech recognition. However, if the Acoustic Model is not good (accurate), you will get many errors dictating one word at a time discretely. This is one way to test just how well your Acoustic Model is performing.

Chuck,

this is definitely true, but only as long as no homophones are involved.

Quote:
the bigram, trigram, quadgram HMM models don't care how long an utterance is. These models are applied across every word in an utterance according to the specific model. Each word in an utterance becomes a target word and the right and left context is analyzed accordingly.

Again, this is also my understanding of how the models work. But my understanding is also, and please correct me if I'm wrong, that the use of the models (n-grams) is strictly restricted to the span of a single utterance, as recognized as such by DNS. Otherwise, dictating in context obviously wouldn't make any difference compared to dictating word by word, would it?

Rüdiger

 



-------------------------



No need to buy if all you want to do is try ...

DragonCapture KB Download (Latest)
DragonCapture Homepage

 03/29/2009 07:11 PM
Chucker
Top-Tier Member

Posts: 14123
Joined: 10/10/2006

Rüdiger,

Now you're cooking.  You're right on the money.  The n-gram models are applied as appropriate.  For example, if you dictate a single word, no Language Model is used and the recognition is done entirely via the Acoustic Model.  If you dictate two words, the bigram model is applied even though there is no right context relative to the second word and no left context relative to the first word as each word is selected as the target word.  If you dictate three or more words, you can see how the progression works as you get up to nine words or more in an utterance, and it simply continues word by word as each word is selected as the target word until you reach the end of the utterance.
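To picture that progression, here is a quick toy count in Python of how much context is actually available around each target word, using the four-words-each-side picture we have been using in this thread (again, not Dragon's internals):

# Toy sketch: how much left/right context is actually available for each
# target word, capped at four words on each side, as the utterance grows.

def available_context(utterance_len, target_index, max_span=4):
    left = min(target_index, max_span)
    right = min(utterance_len - 1 - target_index, max_span)
    return left, right

for n_words in (1, 2, 3, 5, 9):
    windows = [available_context(n_words, i) for i in range(n_words)]
    print(n_words, "words:", windows)
# A single word gets (0, 0) context within the utterance; only with nine or
# more words does the middle word finally get the full (4, 4) window.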

I was thinking about my previous post trying to figure out a simple example.  I managed to come up with this while I was reading your response to my post.  It's not perfect but it explains some of the basics.

Let's say that you dictate a phrase that, after the application of the Acoustic Model, ends up as "to a much".  By itself, that would be the end result after the application of the bigram model.  However, let's suppose that the entire phrase contained additional words before "to" and after "much".  For the sake of analysis, let's simply take what occurs after the word "much".  If what you say after that word is "greater degree", then the Language Model as applied would return the result "to a much greater degree".  However, if what you say after the word "much" is "fatty food" or something similar, then the Language Model would take "to a much fatty food" and replace it with "too much fatty food".  If, in addition, what precedes "to" is "Italian diets contain all" and the Acoustic Model presents "Italian diets contain all to a much", then the likelihood is that the Language Model will present the initial result as "Italian diets contain all too much"; combined with the results of analyzing the rest of the phrase after "much", which contains "fatty foods", the end result presents itself as "Italian diets contain all too much fatty foods".  This is the context with the highest confidence level in the Language Model.

On the other hand, if you dictated "Italian diets contain all too much", the context of that utterance alone would produce the correct result when applied against the Language Model, even if the Acoustic Model presented a different result, simply because the analysis of the context of that phrase contains sufficient indicators for DNS to properly assign the word "too" vs. "2", "to", or "two".  Simply saying "two" by itself, however, generally produces that result because it has the highest coefficient of probability as a word on its own and it is, as you point out, a homophone.  Therefore, DNS interprets the word as "two" because that word has the highest confidence score in the Acoustic Model.
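Here is the same idea as a toy rescoring sketch in Python. All the numbers are invented, and a real recognizer combines acoustic and language scores far more elaborately; this is only meant to show the mechanism:

# Toy rescoring sketch, not Dragon's models: acoustically, "to", "too", "two"
# and "2" sound alike, so the Acoustic Model alone cannot separate them; the
# Language Model picks the candidate that best fits the surrounding words.
import math

CANDIDATES = ["to", "too", "two", "2"]
TOY_LM = {  # invented P(word | previous word)
    ("all", "too"): 0.30, ("all", "to"): 0.05, ("all", "two"): 0.02, ("all", "2"): 0.01,
    ("too", "much"): 0.50, ("to", "much"): 0.01, ("two", "much"): 0.01, ("2", "much"): 0.01,
}
UNSEEN = 0.001

def lm_score(prev_word, word, next_word):
    return (math.log(TOY_LM.get((prev_word, word), UNSEEN))
            + math.log(TOY_LM.get((word, next_word), UNSEEN)))

# "... contain all ??? much fatty foods": rescore the homophone slot.
best = max(CANDIDATES, key=lambda w: lm_score("all", w, "much"))
print(best)  # -> too

With no surrounding words at all, every candidate gets the same fallback score, so the toy language model expresses no preference and the decision would be left to the acoustics, which matches the isolated "two" case above.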

Still further, if you dictate something that is nonsensical, such as "He has very much bravado," DNS will likely interpret that exactly as spoken because there is too little information from which to extract an appropriate context result.  DNS would say, if it could speak, "What the hell do you mean, 'very much bravado'?"  It has insufficient information from which to extract an appropriate context from the Language Model.  Therefore, DNS assumes that is exactly what you mean or what you wanted to say (i.e., speech recognition can't interpret meaning, so it leaves the phrase as is).  If you had ever used IBM's ViaVoice, you might have seen what happens when the speech engine tries to make a judgment call.  In ViaVoice, a phrase such as this might have been responded to by a query to the effect of "Not understood."  That's a little too aggressive.  So, DNS tries to balance without assuming.  That's the simplest way to put it.  Nevertheless, all of this is only part of the whole story.

Chuck Runquist
Owner, GEMCCON - The Choice of Intelligence
Speech Recognition Consulting and Training

"At times we shall simply have to admit that, one way or another, what we can neither explain nor understand certainly doesn't cease to exist because we cannot see how it does or why it should." - Dr. Mark Hyman



-------------------------

VoiceComputer: the only global speech interface.

The views, thoughts and opinions expressed in this post are my own and do not reflect those of VoiceTeach LLC.

Chuck Runquist
VoiceComputer technical support

 03/30/2009 03:46 AM
David.P
Top-Tier Member

Posts: 638
Joined: 10/05/2006

This is a very good explanation of how the language model applies the n-grams throughout the entire length of any utterance. However, the following:

 

If the Acoustic Model is not good (accurate), you will get many errors dictating one word at a time discretely. This is one way to test just how well your Acoustic Model is performing.


and

 

For example, if you dictate a single word, no Language Model is used and the recognition is done entirely via the Acoustic Model.


are not correct.

Of course, when dictating one word at a time, there is no way for NaturallySpeaking to know anything about the words that will eventually follow, which is why any following words can't be taken into account to improve the recognition of the currently dictated single word.

However, even when dictating one word at a time, NaturallySpeaking DOES take into account the preceding context, and applies the language model for every single word that you dictate (provided preceding words have been dictated, or are already present in your document).

 

Try it by opening a document, and dictate the following (saying one word at a time):

  • The ... item ... costs ... one ... sent
  • The ... shipment ... is ... going ... to ... be ... cent
  • He  ... is ... sitting ... on ... a ... bored
  • He ... was ... very ... board
  • He ... is ... not ... aloud
  • You ... must ... read ... out ... allowed

 

[Make up your own examples]

What you will see is that NaturallySpeaking gets the last word of the sentence right almost every time, even though you try to say the wrong word -- even when dictating one word at a time.

 

This proves that the language model is applied even when doing single word dictation (subject to the restrictions initially mentioned). Note that this even works if you don't dictate but type (!) everything but the last word of the respective sentence -- as long as you use an application which is fully Select-And-Say enabled (= fully context-aware) like WordPad, Notepad, Word or DragonPad.

 

The only way you can actually completely disable the language model and dictate only by applying the acoustic model is to dictate one word at a time in a non Select-And-Say application, AND to click with the Mouse (or hit a key on the keyboard) between any two words you say. If you don't click the mouse and don't hit a key on the keyboard in between any two words you say, the language model will be applied again, even in non-Select-And-Say enabled applications.
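As a toy sketch of what this left-context behaviour might look like (invented numbers and a made-up scoring function in Python; Dragon's real models are of course far more involved):

# Toy sketch of the observation above, not Dragon's code: for a single
# dictated word, only the text to the LEFT of the cursor is available as
# context, and the acoustically similar candidates are rescored against it.
# A mouse click or keypress in between would throw that left context away.

HOMOPHONES = ["sent", "cent", "scent"]
TOY_LM = {  # invented P(word | previous word)
    ("one", "cent"): 0.40, ("one", "sent"): 0.02, ("one", "scent"): 0.01,
    ("the", "scent"): 0.20, ("the", "cent"): 0.02, ("the", "sent"): 0.01,
}
UNSEEN = 0.001

def rescore(left_context, candidates):
    prev = left_context.split()[-1].lower() if left_context.strip() else None
    return max(candidates, key=lambda w: TOY_LM.get((prev, w), UNSEEN))

print(rescore("The item costs one", HOMOPHONES))   # -> cent
print(rescore("You could smell the", HOMOPHONES))  # -> scent
print(rescore("", HOMOPHONES))  # no left context: all tie, first candidate wins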

 

David.P



-------------------------

Sennheiser MKH Mic
Visual & Acoustic Feedback + Automatic Mic Control



 03/30/2009 10:35 AM
Alan Cantor
Top-Tier Member

Posts: 4280
Joined: 12/08/2007

Quote:
Try it by opening a document, and dictate the following (saying one word at a time):

* The ... item ... costs ... one ... sent

* The ... shipment ... is ... going ... to ... be ... cent

* He ... is ... sitting ... on ... a ... bored

* He ... was ... very ... board

* He ... is ... not ... aloud

* You ... must ... read ... out ... allowed

 

[Make up your own examples]

What you will see is that NaturallySpeaking gets the last word of the sentence right almost every time, even though you try to say the wrong word -- even when dictating one word at a time.

 

This is fascinating! This is good evidence that the language model is active, even if only to a limited degree, when dictating only one word at a time.

 03/30/2009 12:07 PM
Chucker
Top-Tier Member

Posts: 14123
Joined: 10/10/2006

David,

Essentially I don't disagree with you. It is obvious that some form of context recognition is taking place. Where I disagree with you is that your examples are neither a necessary nor a sufficient condition for concluding that it must therefore be a byproduct of the application of the Language Model.

I'm not disputing your results; I'm simply pointing out that your analysis doesn't absolutely point to the Language Model, and it only takes one set of examples (variables) into consideration.

Here are some inconsistencies with your analysis, and please understand that I'm not trying to start a shooting match. What I'm trying to do is to see if between the two of us we can figure out exactly what's happening. As an example, note the following:

He sent a letter costing two cents

Each word dictated discretely, with a significant pause between each, produces the above. By the same token, the following phrase, with each word dictated individually with a significant pause, produces the following.

The   sent   of   flowers   filled   the   air

The above results are not consistent with your analysis. That is, dictating each of the above words individually with a pause between them should produce:

The scent of flowers filled the air

However, it obviously doesn't come out that way when you dictate each word one at a time, pausing after each. Here's another example where your analysis doesn't seem to apply:

Two  cents  of  flowers  filled  the  air

According to your explanation it should come out as follows:

Two scents of flowers filled the air

However, the immediate left context preceding "cents" being "two" makes sense in terms of that context combination.  That is, it is very infrequent that the average user would dictate "two scents of flowers filled the air".  In addition, some of these phrases dictated as one utterance are not always correctly interpreted (recognized) even when the full utterance is applied across the Acoustic Model and Language Model.  For example, "two sense flowers filled the air" can be the result until corrected.  But even here, "sense" and "scents" are homophones and can cause difficulty anyway, resulting in inconsistent results.

Here is a third set of examples:

common sense

two cents

dollars and sense                            (the value was translated into dollars and cents – Language Model application)

the boy cried foul

some birds are water fowl

all birds are considered foul          (not necessarily inconsistent because the context is "considered foul", not "birds are considered..." indicating that whatever context analysis is occurring is limited to bigrams)

go to the beginning of the wine

felony  time        felony  crime        felony  line       felony  lime

don't    do    the    rhyme    if    you    can't    do    the   line

the     clock   doesn't   show   the   right   crime

What these examples show is that there do appear to be some basic bigram context relationships with certain words. On the other hand, some of these phrases are not corrected automatically when the words are dictated individually, pausing between each word so that each word is a single utterance. In other cases, selecting phrases like "dollars and sense" and re-dictating them produces the correct results. Nevertheless, what these examples do show is that some bigram combinations do produce the correct results when each word is dictated individually, followed by at least a one-second pause. On the other hand, the context isn't detected in cases such as "go to the beginning of the wine". So, Language Model application isn't the full or complete explanation. However, some two-word combinations can be included in the Acoustic Model (base model) based on common word associations. For example, "common sense" will virtually never come out as "common cents", even when the words are dictated one at a time, pausing between each. My point is simply that Language Model application is not necessarily the only explanation, or even necessarily part of it.

From the above examples, it would appear that some form of bigram context analysis is being performed under SOME conditions. However, based on what I know about acoustic and language models, my guess would be that there are certain forms of bigram contexts built into the Acoustic Model. Remember that you start out with a base Acoustic Model compiled from a large corpus of speech data (i.e., what you get when you select "None" for training when creating a new user), which is known to be compiled from many thousands of speakers and tens of millions of words into a base HMM model.

I would welcome your explanation of these inconsistencies based on all the previous statements that you have made relative to such.  It would be interesting to see it explained from a different perspective.  Mine obviously tends to be colored by my understanding of the underlying functionality of the speech models.

Chuck Runquist
Owner, GEMCCON - The Choice of Intelligence
Speech Recognition Consulting and Training

Logic 101: Post Hoc fallacy - Post Hoc Ergo Propter Hoc: After the fact, therefore because of the fact.  The general assumption that a consequent event or observation maintains an absolute cause and effect relationship to its precedent is false on its face.  While precedent and consequent events or observations may have a direct cause and effect relationship, the absolute assumption of such is invalid, and therefore false on its face.



-------------------------

VoiceComputer: the only global speech interface.

The views, thoughts and opinions expressed in this post are my own and do not reflect those of VoiceTeach LLC.

Chuck Runquist
VoiceComputer technical support

 03/30/2009 03:51 PM
David.P
Top-Tier Member

Posts: 638
Joined: 10/05/2006

The described phenomenon essentially consists of two parts: a) application of the Language Model, but of course b) only regarding what's LEFT of the cursor…!

Needless to say, examples like "The sent of flowers filled the air" can't possibly work at all, since -- as already pointed out (and it's self-evident if you think about it for a second) -- after you have dictated as far as "The sent…", NaturallySpeaking of course doesn't know whether you're going to carry on with some story about flowers, or about money.

Instead, of course the sentence must BEGIN with flowers, in order for NaturallySpeaking to write "scent" and not "cent" or "sent" afterwards.

The described phenomenon is also not the result of a "bigram context relationship in the acoustic model". There is no such thing as a bigram relationship in the acoustic (!) model, other than the multi-word entries people might have in their vocabulary (a.k.a. results of "make that a phrase"). An utterance either constitutes a single word, in which case its outcome is the result of vocabulary and Acoustic Model recognition, or it is composed of several words in an n-gram relationship, in which case the outcome is the result of vocabulary, Acoustic Model AND Language Model recognition. We really should not go to the extent of suddenly saying that e.g. "common sense" never coming out as "common cents" is a result of the Acoustic (!) Model. Rather, and of course, the proper dictation result in this case is the very essence and merit of the Language Model.

Further, almost all of my above examples clearly show that it is not only the second-to-last word (bigram-wise) that makes the end of the sentence come out right, but mostly the entire sentence left of the last word.

But what's more, as already stated in my last post, the context adjustment of the respective last word that has been dictated (single-word-wise) also takes place if the sentence before that last word HAS BEEN ONLY TYPED -- and not dictated.

If not the former already, then clearly the latter is the final and irrefutable proof that it is of course the Language Model, not the Acoustic Model, that makes the last word come out right as shown in the above examples -- since you didn't even dictate more than that very SINGLE last word in the first place.

NaturallySpeaking in this case simply checks the words left of the cursor, and adjusts the single word that you dictated at the cursor location accordingly -- using only the Language Model, since of course there is no acoustic information whatsoever, about the adjacent words, since they have been typed not dictated.

David.P



-------------------------

Sennheiser MKH Mic
Visual & Acoustic Feedback + Automatic Mic Control

 03/30/2009 06:21 PM
rjwilmsi
Member

Posts: 76
Joined: 08/24/2008

I'm reading this topic with a great deal of interest, but unfortunately most of the information seems to be going over my head. One thing I did wish to clarify: does Dragon NaturallySpeaking require that corrections be made on entire utterances only, so that manually selecting a handful of words around the incorrect word and then opening the correction menu (as opposed to using the "Correct That" command with the cursor placed on the appropriate word) would not lead to Dragon NaturallySpeaking "learning" properly?
 03/31/2009 02:57 AM
David.P
Top-Tier Member

Posts: 638
Joined: 10/05/2006

Quote:
One thing I did wish to clarify: does Dragon NaturallySpeaking require that corrections be made on entire utterances only, so that manually selecting a handful of words around the incorrect word and then opening the correction menu (as opposed to using the "Correct That" command with the cursor placed on the appropriate word) would not lead to Dragon NaturallySpeaking "learning" properly?

rjwilmsi, the simplest and best method is always the latter (the one in parentheses in your quote). Only if the utterance is too long to be handled properly in the correction menu/Spell dialog (i.e. too many alternatives and/or the utterance doesn't fit in the correction window) might you want to select and correct only the wrongly recognized word by itself.

If, on the other hand, you manually select more than one word, chances are that the words will span two utterances. In this case, you won't get playback on correction, and what's more, NaturallySpeaking will not learn anything from the respective correction.

David.P



-------------------------

Sennheiser MKH Mic
Visual & Acoustic Feedback + Automatic Mic Control
