"Mini-contest" launched (on new "beta" contests platform)
Thread poster: Henry Dotterer
Local time: 04:39
SITE FOUNDER
TOPIC STARTER
Winners visible in 30 pairs Nov 16, 2012

Hi all,

Winners have been determined in 30 pairs. Congratulations to the winners!

In pairs with winners, discussion has been made possible -- for the entire pair, and for each entry.

In four pairs, there were just one or two entries, so voting was never held. Discussion has been enabled in those pairs, as well, and those who are in a position to evaluate the submissions are encouraged to provide feedback to the authors. If you see something you liked about their translations, please consider sending a message.

In 30 additional pairs, we have not yet met the 3-voters per entry threshold. We are close in many cases, so voting has been extended for two weeks. Discussion is not possible in those pairs.


 
Roland Nienerza  Identity Verified

Local time: 09:39
English to German
+ ...
translation amalgam - ups and downs Nov 19, 2012

Katalin Horváth McClure wrote:

Roland Nienerza wrote:


Your objection to the idea of extending the segment by segment rating from this contest with a compound of individual remarks


Roland,
I am not talking about the voting.
I am talking about the "experiment" or "new feature", which Henry calls "composite best", whereby the system creates a complete "translation" using individual sentences taken from different submissions.



Sorry, Katalin, for having misread your comment.

I completely agree with you that the idea of an amalgam or hybrid concoction of loose ends taken from different submissions for a "composite best" sounds, on first hearing, rather odd for any professional of the craft.

Although, on second or third thought, I could imagine that it may work in a way. At my no longer so young age it quite often happens that, when having an occasional look into some classic literature, I see isolated sentences or passages where I could imagine a bit of "better writing", even in a masterpiece. -

That is to say that on second thought Henry's idea could well be "workable". In a way, it could even be a very extraordinary, rather ground-breaking experiment of no less than - I hope Henry will not object to the definition - clearly Frankensteinian dimensions. - But, given that the selection of the haphazard elements will be based on the actual ranking established by the community, the experiment would indeed have a good degree of objectivity.

Yet this would then really presuppose voting for a "flowing text" on a sentence-by-sentence basis. Also somewhat new, but maybe worth trying.



[Edited at 2012-11-19 20:52 GMT]


 
Roland Nienerza  Identity Verified

Local time: 09:39
English to German
+ ...
After the "beta" platform's first edition Nov 19, 2012

As this very interesting new concept for the contests has now come to a nice conclusion for a good number of combinations, it may be time for an overall evaluation.

I already said that it took me a while to see the plausibility of voting on a segment-by-segment basis. This indeed creates a good degree of objectivity while at the same time avoiding the excessive discussions of individual sentences that occurred in the preceding contest editions. - It is clear that such a breaking down of a source text along sentences is only feasible in these "finger practice" exercises of contests with rather short source texts. This approach could certainly not easily be extended to larger subject material.

But I would raise one point for discussion.

From what I see, there is, almost across the board, quite a difference between the "segment" and "entry" averages obtained by a submission. And this is largely because "segments" regularly got hundreds of ratings while "entries" got only single digits.

In my combination the winner got these results -

Rating type    Overall    Quality                Accuracy
Segments       3.05       3.08 (197 ratings)     3.01 (205 ratings)
Entry          3.95       4.40 (5 ratings)       3.50 (4 ratings)

That is a rather low rating for "segments" from about 200 voters.

The runner-up had -

Rating type    Overall    Quality                Accuracy
Segments       3.88       3.86 (170 ratings)     3.91 (180 ratings)
Entry          3.73       3.75 (8 ratings)       3.71 (7 ratings)

The "segment" rating here is markedly superior. - Indeed, the runner-up's "segment" rating is one of the highest among all roughly thirty submissions in this pair, while the "winner" got one of the very lowest "segment" ratings in this pair.
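The discrepancy between the two rating lines can be made concrete with a small sketch using the figures from the two tables above. One caveat: the assumption that the "Overall" column is the plain mean of the quality and accuracy averages is mine; the site's actual formula is not documented in this thread.

```python
# Figures posted above for the winner and the runner-up:
# (quality_average, accuracy_average) per rating line.
winner = {"segments": (3.08, 3.01), "entry": (4.40, 3.50)}
runner_up = {"segments": (3.86, 3.91), "entry": (3.75, 3.71)}

def overall(line):
    """Assumed: the 'Overall' figure is the plain mean of quality and accuracy."""
    quality, accuracy = line
    return (quality + accuracy) / 2

# The runner-up is clearly ahead on segments ...
print(overall(winner["segments"]), overall(runner_up["segments"]))  # ~3.05 vs ~3.89
# ... yet the winner is ahead on the entry line, which decided the finals.
print(overall(winner["entry"]), overall(runner_up["entry"]))  # ~3.95 vs ~3.73
```

Whatever the exact formula, the ordering is reversed between the two lines, which is the contradiction discussed below.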

But the "winner" had a strange peak in the "entry" rating from just 4 and 5 voters - which brought that winning piece into the finals, although, as said, many other submissions were way better on "segments". - And in the finals the winning piece was again selected by a relatively small number of voters.

To me this shows a clear contradiction in the voting system as proposed on this "beta" platform. - It does not make sense to me that a winning piece, particularly in this case, rates with its individual sentences clearly at the lower end of all submissions in terms of "segment" quality. It is rather obvious to me that averaging two such different rating lines is not admissible. It simply cannot be that a piece gets a high ranking on "segments" but finishes low in the "overall" ranking, or vice versa.

If the "segment" ranking can be easily squashed by the "entry" ranking in the voting phase, and again by an "overall" ranking in the finals phase, it becomes useless to have a "segment" rating at all. - If that meant choosing between either the "segment" rating or the "overall" rating, I would clearly opt for the "segments", because in my view an otherwise quite nice submission with one, two or more very grave mistakes should not win on "overall".

I think that the "segment" vote is the right approach and should be followed up, without the contradictory effect of the "entry" or "overall" vote taking the upper hand.


 
Katalin Horváth McClure  Identity Verified
United States
Local time: 04:39
Member (2002)
English to Hungarian
+ ...
Just a note, Roland Nov 20, 2012

I am sorry, it is too late for me to read and understand everything you write about the numbers and averages, etc., I just wanted to point out that there were 14 segments, so if 10 people gave stars to your 14 segments, you got 140 ratings right there.
This explains how it is possible that the segments ratings are in the hundreds, while the entry ratings are single digits.
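Katalin's arithmetic can be sketched as follows; the voter counts here are illustrative, not actual contest data:

```python
SEGMENTS_PER_ENTRY = 14  # the contest source text had 14 segments

def total_segment_ratings(voters: int, segments: int = SEGMENTS_PER_ENTRY) -> int:
    """Each voter who stars every segment contributes one rating per segment."""
    return voters * segments

# Ten voters already produce 140 segment ratings for a single entry,
# while those same ten people would leave at most ten entry-level ratings.
print(total_segment_ratings(10))  # 140
print(total_segment_ratings(15))  # 210
```

So ratings "in the hundreds" on segments correspond to only a dozen or so actual voters.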

Also, the point system was different in the finals than in the qualifying round, so I am not sure why you are referring to the qualifying points of the finalists, or how it is relevant.

In the finals people had to pick a 1st, a 2nd and a 3rd place entry, and I am sure most people did that based on the overall impression of the entries. It is entirely possible that someone who produced good translations for almost all segments messed up very badly on one or two of them, and that caused the voters to not give the person a medal.

I know in my language pair there was one entry that contained one colossal mistranslation and it was so bad that even if the rest was perfect, the entry had no chance of winning. Similarly, if there are fairly equal quality translations, but one entry has a brilliant solution to one of the segments, that can make that entry the winner.

People incorporate these impressions into the voting, when they pick 1st, 2nd and 3rd. If only the segment averages were counted, an entry with a gross mistranslation can still accumulate more points than one that had no such mistakes, but maybe a few small ones.

This segment-based evaluation, where you compare X number of translations of a single sentence taken out of the text, would IMHO be totally counterproductive for normal texts.
I have no idea what prompted abandoning the platform that was used for previous contests, but I think it worked all right, and I think the tagging feature was particularly useful, much better than this. At least as I remember, although the memory may be fading as the last normal contest was a long time ago (3 years, I think).

Katalin

[Edited at 2012-11-20 04:36 GMT]


 
Roland Nienerza  Identity Verified

Local time: 09:39
English to German
+ ...
In search of fairness and competence Nov 20, 2012

Thanks, Katalin, for your additional input to this.

I reply to it in some detail, not so much to contradict your views, which I largely share, but rather in the hope of giving some more fodder for further brainstorming to the contest framers.

Katalin Horváth McClure wrote:

I am sorry, it is too late for me to read and understand everything you write about the numbers and averages, etc., I just wanted to point out that there were 14 segments, so if 10 people gave stars to your 14 segments, you got 140 ratings right there.
This explains how it is possible that the segments ratings are in the hundreds, while the entry ratings are single digits.


Sure, OK. This occurred to me too after posting the above. - In addition, there seems to have been some sporadic and random rating of individual segments by people who did not care to rate an entire entry, which may have added to the disproportion.

Also, the point system was different in the finals than in the qualifying round, so I am not sure why you are referring to the qualifying points of the finalists, or how it is relevant.

In the finals people had to pick a 1st, a 2nd and a 3rd place entry, and I am sure most people did that based on the overall impression of the entries.


That's what I had clearly seen. But I do not like it.

I considered it unfair that in my combination 7 finalists were chosen without their segment rankings being shown, in order to - as Henry said - avoid bias in the final vote. - At the same time, segment voting for the non-finalists was still going on. And when the final ranking was closed, the segment ratings became visible, and a clear discrepancy appeared between the winning piece and the runner-up, or runners-up, in segment voting.

The hitch in this beta edition was that the finalists were chosen on a somewhat obscure averaged mix of segment and "gut" voting - see Henry's comments on this above. It was only this kind of averaging that brought the winning piece in my combination into the finals; on segment voting alone it would not even have made it that far. - And so, by a strong vote for "overall impression", the piece got to the finals. And there it got another strong "thumbs up", obviously from the same people, and won, although it had been ranked only third or fourth best on segments.

It is entirely possible that someone who produced good translations for almost all segments messed up very badly on one or two of them, and that caused the voters to not give the person a medal. I know in my language pair there was one entry that contained one colossal mistranslation and it was so bad that even if the rest was perfect, the entry had no chance of winning.


And it would indeed most probably not have won if voting had been on "overall impression".

Similarly, if there are fairly equal quality translations, but one entry has a brilliant solution to one of the segments, that can make that entry the winner.


But in that case it could have won in either line, "segments" as well as "overall".

People incorporate these impressions into the voting, when they pick 1st, 2nd and 3rd. If only the segment averages were counted, an entry with a gross mistranslation can still accumulate more points than one that had no such mistakes, but maybe a few small ones.


Correct. - Which again shows the contradictions of the amalgam voting process.

As far as your example above is concerned, there appears to be another hitch in the present voting scheme. - I have already suggested that the point range of 1 to 5 is rather narrow, and that the most negative vote one can express is giving no points at all. It could be worthwhile to consider allowing ratings from -5 to +5, so that negative ratings would have a bigger impact and could even undo positive ones (see your comment above on colossal mistakes).
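A minimal sketch of the difference between the two scales; the voter numbers are hypothetical, chosen only to show how a negative rating could "undo" positive ones:

```python
def average(ratings):
    """Plain arithmetic mean of a list of ratings."""
    return sum(ratings) / len(ratings)

# Current scheme: 1..5 stars; the harshest option is a 1 (or not rating at all),
# so three enthusiastic voters still keep the score high.
current = [5, 5, 5, 1]
# Proposed scheme: -5..+5; disapproval actively pulls the score down.
proposed = [5, 5, 5, -5]

print(average(current))   # 4.0
print(average(proposed))  # 2.5
```

With the wider scale, one strongly negative voter can cancel out an entire positive vote rather than merely diluting the average.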

This segment-based evaluation, where you compare X number of translations of a single sentence taken out of the text, would IMHO be totally counterproductive for normal texts.


That was my first reaction to this idea too. - It is clear that such a concept of breaking a source text into individual sentences could at best be practicable only for short texts. But that is what the contest material has been so far and understandably will remain. - And I have indeed come around to seeing the value of such a more differentiated and somewhat more objective approach than just the "overall impression".

I have no idea what prompted abandoning the platform that was used for previous contests, but I think it worked all right, and I think the tagging feature was particularly useful, much better than this. At least as I remember, although the memory may be fading as the last normal contest was a long time ago (3 years, I think).


I too liked the tagging feature of the last 2 or 3 contests before this beta. - But I had observed that the tagging system had a tendency to snowball into an unmanageable cacophony of bickering, and I therefore take it that the contest framers wished to streamline the process.

Yet, to wind up: as I see it, it will not be very practicable to try to combine "segment" voting and "overall impression" when either of these can have the effect of undoing the other. - And if one system has to be chosen, I would go for "segments" as the more differentiated procedure.

Maybe one could consider a "segment" vote with an additional handicap, i.e. the possibility of flagging a segment as an "unpardonable, submission-ruining miss", maybe called a "UP", which would lead to automatic disqualification of an entry if, say, five other people agreed with such a "UP". - BTW, I saw in the finalist group of my combination such a case of a "UP" which in my view should have meant exclusion of that entry from voting. Yet the piece became a finalist, probably on the grounds of "overall impression".
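The "UP" rule suggested above could be sketched like this; the five-flag threshold is Roland's suggestion, while the function and data shapes are hypothetical:

```python
UP_THRESHOLD = 5  # suggested: five concurring "UP" flags disqualify an entry

def is_disqualified(up_flags_per_segment):
    """An entry is out if any single segment collects enough 'UP' flags.

    `up_flags_per_segment` is a list with one flag count per segment.
    """
    return any(flags >= UP_THRESHOLD for flags in up_flags_per_segment)

print(is_disqualified([0, 2, 1, 0]))  # False: no segment reaches the threshold
print(is_disqualified([0, 6, 1, 0]))  # True: the second segment was flagged six times
```

A rule like this would let a single ruinous mistranslation override an otherwise good average, which is exactly what the plain segment mean cannot do.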


 
Roland Nienerza  Identity Verified

Local time: 09:39
English to German
+ ...
Voting on target alone makes no sense in a translation contest Nov 21, 2012

Henry Dotterer wrote:

Roland Nienerza wrote:
If someone translates "the sky is blue" into "the sky is green" or "the sky is pink", I would never give anything other than the lowest rating on both "accuracy" and "quality of writing", no matter how precise the spelling of "green" and "pink" was. If the translation is wrong, quality of writing does not matter any more.


But consider that it is possible for -- and the contest platform now allows -- a person who is native in the target language to rate the quality of writing of an entry, without considering the accuracy of translation. (A native speaker of English can therefore rate the quality of writing of a Turkish to English translation, without understanding Turkish.) In principle, I believe this possibility should make the contests better.


Voting just on the quality of the target writing, without regard to the source, would be voting on English or other target-language composition, copywriting or storytelling. It would no longer bear any relationship to a translation contest.

No one should be eligible to vote in a combination who has not registered as being knowledgeable in both the source and target languages of that combination.

[Edited at 2012-11-21 22:44 GMT]


 
Shai Navé  Identity Verified
Israel
Local time: 10:39
English to Hebrew
+ ...
When will the rest of the languages be closed and announced? Dec 13, 2012

I am specifically asking about Hebrew (in which I participated). If this is a case of not enough voters, could at least the existing rankings and feedback be made available, even without declaring a winner?

Thank you.


 
Łukasz Gos-Furmankiewicz  Identity Verified
Poland
Local time: 09:39
English to Polish
+ ...
Mini contests? Jun 26, 2013

Is the idea still active? It would be nice to see a small-time contest every couple of months or so, in addition to the big ones. It makes me think of the idea of private contests declared by individual members, which I remember reading about somewhere on Proz.com. This could also make sure that more fields are covered, in addition to more pairs.

As for sources that have translations available, how about writing something entirely new specifically for the contest? I'm sure some of us are competent enough writers or know where to find one.


 
Daniel Penso
United States
Local time: 01:39
Member (2012)
Japanese to English
+ ...
Mini contests and language variety Sep 19, 2013

I agree that there should be more contests and I also believe that there should be more source texts from African and Asian languages (personal preference). I have also yet to come across a target text for Thai speakers.

I am not a native Thai speaker but would like to hone my Thai skills, so I would be happy if that existed. Translating Spanish to Thai or Japanese to Thai would be interesting (for me).


 