This is an exercise in shortening your writing. Writing rough drafts helps us produce enough words in a short time, but now we need to look inward: cut the sentences and descriptions that are not related to your subject. Practice with three passages: write 300 words describing one thing, then shorten it to 200 words, and finally to 100 words. By comparing the three versions, you can spot the awkward parts of your prose.

**Passage A - 300 words**

To compute the similarity between a low-dimensional projection and its high-dimensional source dataset, we need a way to connect them. Because dimensionality reduction behaves like a black box, it is hard to locate the match for a source item in the projection. We therefore build the matching in the inverse direction.

In a 2D projection, we can attach pixel information to every point. Then, for each point, we can find the matching point or item in the source dataset; since we have full access to the source data, this can be done programmatically. This links the points in the low and high dimensions. Next, we compute similarity, again working from the low-dimensional projection: we select one point (pixel) and take a circular area around it. Once we have the neighborhood, we find all of its matching points and use Euclidean distance to compare the two sets of points.
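The neighborhood step above can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the array layout, function name, radius parameter, and the choice of averaging the distances are all assumptions.

```python
import numpy as np

def neighborhood_similarity(proj_2d, data_hd, center_idx, radius):
    """Compare a 2D neighborhood with its matched high-dimensional points.

    proj_2d: (n, 2) array of projected coordinates.
    data_hd: (n, d) array of the original high-dimensional records.
    Row i of proj_2d is assumed to map back to row i of data_hd
    (the inverse matching described in the text).
    """
    # Circular neighborhood around the selected point in the projection.
    d2d = np.linalg.norm(proj_2d - proj_2d[center_idx], axis=1)
    neighbors = np.where(d2d <= radius)[0]

    # Euclidean distances of the matched high-dimensional points
    # to the selected point's own source record.
    dhd = np.linalg.norm(data_hd[neighbors] - data_hd[center_idx], axis=1)

    # One possible way to reduce the distance set to a single score.
    return dhd.mean()
```

Points that sit close in the projection but far apart in the original space would then produce a large score for that neighborhood.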

In addition, because this computation closely resembles a variance computation, we also created variance-based methods. With the similarity mechanism in place, the next step is to test how we compute the distance.
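The text does not spell out the variance-based variant, so the following is only a guess at one plausible form: score the same circular neighborhood by the variance of its matched high-dimensional points instead of by their distances.

```python
import numpy as np

def neighborhood_variance(proj_2d, data_hd, center_idx, radius):
    # Same circular neighborhood in the projection as before.
    d2d = np.linalg.norm(proj_2d - proj_2d[center_idx], axis=1)
    neighbors = np.where(d2d <= radius)[0]

    # Total variance: per-dimension variance of the matched
    # high-dimensional points, summed over all dimensions.
    return data_hd[neighbors].var(axis=0).sum()
```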

We chose more than ten datasets and ran a set of experiments for the later case study. After generating our explanations, we tried to interpret what they showed. From these tests, we found a parameter range that works for most of the cases we tried.


**Passage B - 200 words**

We created a way to compute the similarity between a low-dimensional projection and its high-dimensional source dataset. Because dimensionality reduction behaves like a black box, we build the matching in the inverse direction.

Since we have full access to the source data, we can find the matching point or item for each projected point, linking the points in the low and high dimensions. We then compute similarity on the same idea: select one point (pixel), take a circular area around it, find all the matching points of that neighborhood, and use Euclidean distance to compare the two sets of points.

Because this computation resembles a variance computation, we also created variance-based methods. This completes our way of computing similarity.

We chose more than ten datasets and ran a set of experiments for the later case study. After generating our explanations, we tried to interpret what they showed. From these tests, we found a parameter range that works for most of the cases we tried.

**Passage C - 100 words**

We created a way to compute the similarity between a low-dimensional projection and its high-dimensional source dataset, building the matching in the inverse direction.

Since we have full access to the source data, we can find the matching point or item for each projected point. We select one point (pixel), take a circular area around it, find all the matching points of that neighborhood, and use Euclidean distance to compare the two sets of points.

We also created variance-based methods. We chose more than ten datasets for the later case study, tried to interpret what our explanations showed, and found a parameter range that works for most cases.
