For example, the PTE Academic and Versant tests provide an automated, impartial, and timely assessment of speaking and writing, regardless of where test takers live, their accent, or their gender. One of the challenges human markers face when evaluating spoken and written responses is judging many attributes in a single response. Once trained on each speaking and writing attribute, however, our AI systems can mark responses immediately and accurately across any number of attributes.

When we built our AI scoring systems for speaking and writing, we trained our engines on millions of student responses, and we continue to do so as new items are developed. To train an engine, we first have expert human markers score the items, with each response scored twice. We then validate the trained engine by introducing many more human-marked responses and verifying that the machine scores correlate strongly with the human scores. If an engine's scores do not closely match those of the human markers, we remove the item, because it is essential to meet the standard set by human markers. Once the linguistic model is established, the engine can evaluate every response written in a test. To make this robust, we ensure that our AI systems for speaking and writing are trained on a wide range of accents and spelling variation. Find out how Versant tests can automate the assessment process and why it's important to place university students in the right English program.
While selected-response questions can be scored automatically to assess receptive skills, such as listening and reading comprehension, they cannot measure the productive skills of speaking and writing. This article has explained the key processes involved in automated AI scoring and shown that these AI technologies are built on a foundation of consistent expert human judgement. Item types designed to test specific skills can, with automated assessment, be scored quickly and accurately at any time. We hope this article has answered some burning questions about how AI is used to assess speaking and writing in our language tests.