---
id: 594faaab4e2a8626833e9c3d
title: Tokenize a string with escaping
challengeType: 5
videoUrl: ''
dashedName: tokenize-a-string-with-escaping
---

--description--

Write a function or program that can split a string at each non-escaped occurrence of a separator character.

It should accept three input parameters:

- The string
- The separator character
- The escape character

It should output a list of strings.

Rules for splitting:

The fields that were separated by the separators become the elements of the output list. Empty fields should be preserved, even at the start and the end.

Rules for escaping:

"Escaped" means preceded by an occurrence of the escape character that is not itself already escaped. When the escape character precedes a character that has no special meaning, it still counts as an escape character (but does not do anything special). Each occurrence of the escape character that was used to escape something should not become part of the output.

Demonstrate that your function satisfies the following test case: given the string

`one^|uno||three^^^^|four^^^|^cuatro|`

and using `|` as the separator and `^` as the escape character, your function should output the following array:

`['one|uno', '', 'three^^', 'four^|cuatro', '']`
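
One way to picture these rules is a single left-to-right scan that remembers whether the previous character was an unescaped escape character. The following sketch (using the hypothetical name `tokenizeSketch`) illustrates that idea; it is only one possible approach, not the required implementation:

```js
// A minimal sketch of one possible approach: scan the string character by
// character, tracking whether the previous character was an unescaped escape.
function tokenizeSketch(str, sep, esc) {
  const fields = [];
  let field = '';
  let escaped = false;
  for (const ch of str) {
    if (escaped) {
      field += ch; // an escaped character is kept literally
      escaped = false;
    } else if (ch === esc) {
      escaped = true; // the escape character itself is dropped from the output
    } else if (ch === sep) {
      fields.push(field); // an unescaped separator closes the current field
      field = '';
    } else {
      field += ch;
    }
  }
  fields.push(field); // the trailing field is preserved, even if empty
  return fields;
}
```

Running this sketch on the sample string above with `|` and `^` yields `['one|uno', '', 'three^^', 'four^|cuatro', '']`.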

--hints--

`tokenize` should be a function.

assert(typeof tokenize === 'function');

`tokenize` should return an array.

assert(typeof tokenize('a', 'b', 'c') === 'object');

tokenize("one^|uno||three^^^^|four^^^|^cuatro|", "|", "^")应返回[“one | uno”“”“three ^^” ,“四个^ | cuatro”“”]“)

assert.deepEqual(tokenize(testStr1, '|', '^'), res1);

tokenize("a@&bcd&ef&&@@hi", "&", "@")应返回["a&bcd", "ef", "", "@hi"]

assert.deepEqual(tokenize(testStr2, '&', '@'), res2);

--seed--

--after-user-code--

const testStr1 = 'one^|uno||three^^^^|four^^^|^cuatro|';
const res1 = ['one|uno', '', 'three^^', 'four^|cuatro', ''];

// TODO add more tests
const testStr2 = 'a@&bcd&ef&&@@hi';
const res2 = ['a&bcd', 'ef', '', '@hi'];

--seed-contents--

function tokenize(str, sep, esc) {
  return true;
}

--solutions--

// tokenize :: String -> Character -> Character -> [String]
function tokenize(str, charDelim, charEsc) {
  // Fold over the characters, carrying three pieces of state:
  //   esc   – was the previous character an unescaped escape character?
  //   token – the field currently being built
  //   list  – the completed fields so far
  const dctParse = str.split('')
    .reduce((a, x) => {
      const blnEsc = a.esc;
      const blnBreak = !blnEsc && x === charDelim; // unescaped delimiter ends the field
      const blnEscChar = !blnEsc && x === charEsc; // unescaped escape is dropped from the output

      return {
        esc: blnEscChar,
        token: blnBreak ? '' : (
          a.token + (blnEscChar ? '' : x)
        ),
        list: a.list.concat(blnBreak ? a.token : [])
      };
    }, {
      esc: false,
      token: '',
      list: []
    });

  // The last (possibly empty) field is still pending when the scan ends.
  return dctParse.list.concat(
    dctParse.token
  );
}
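
As a quick, illustrative sanity check (not part of the challenge's own test harness), the solution can be run against the two inputs used in the hints:

```js
// Illustrative only; mirrors the two test cases from the hints above.
console.log(tokenize('one^|uno||three^^^^|four^^^|^cuatro|', '|', '^'));
// -> [ 'one|uno', '', 'three^^', 'four^|cuatro', '' ]
console.log(tokenize('a@&bcd&ef&&@@hi', '&', '@'));
// -> [ 'a&bcd', 'ef', '', '@hi' ]
```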