Updated re-marking completed: Paper 1A has now been marked using the exact answer list you provided in the attached document “Paper”. The report below also recalculates the final ICT score out of 100 using Paper 1 = 65% and Paper 2 = 35%.

HKAICT ICT Mock Examination Report

Detailed report for Paper 1A, 1B, 2A and 2B in interactive HTML and printable PDF layout

Candidate Name
To Chun Hei
Candidate Number
26978210
Components Marked
Paper 1A, Paper 1B, Paper 2A, Paper 2B
Marking Standard
Based on HKDSE ICT knowledge, syllabus expectations and common marking principles

Final Weighted ICT Score

Paper 1 = 65%, Paper 2 = 35%
73.45 / 100
Overall weighted percentage: 73.45%

This is a clearly improved result after re-marking the MCQ accurately. Your overall profile is now stronger than the previous estimate, especially in Paper 1A.

Paper 1A
33 / 40
82.5%
Paper 1B
45 / 60
75.0%
Paper 2A
15 / 30
50.0%
Paper 2B
24 / 30
80.0%
Raw paper totals
  • Paper 1: \(33 + 45 = 78/100\)
  • Paper 2: \(15 + 24 = 39/60 = 65/100\)
Weighted contributions
  • Paper 1 contribution: \(78 \times 0.65 = 50.70\)
  • Paper 2 contribution: \(65 \times 0.35 = 22.75\)
  • Total: \(50.70 + 22.75 = 73.45\)
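The weighting above can be recomputed in a few lines of Python (a quick sketch using only the component marks stated in this report):

```python
# Recompute the weighted ICT score from the raw component marks.
paper_1 = 33 + 45               # Paper 1A + Paper 1B, already out of 100
paper_2 = (15 + 24) / 60 * 100  # Paper 2A + Paper 2B, rescaled from /60 to /100

total = 0.65 * paper_1 + 0.35 * paper_2
print(round(total, 2))  # 73.45
```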
Overall judgement: This is now a fairly solid paper overall. Paper 1 is good, with a particularly strong MCQ score, and Paper 2B is also strong. The main area holding the final mark down is Paper 2A Databases, followed by some precision issues in the long questions.

Part 1 — Marking

All questions are marked according to the HKDSE ICT syllabus knowledge and typical marking expectations.

Paper 1A — Multiple Choice: 33 / 40

Q | Topic | Your Ans | Correct Ans | Mark | Reason
1 | Data capture error | B | B | 1/1 | Correct. Entering z as 2 while copying data is a transcription error.
2 | Sign-and-magnitude | C | C | 1/1 | Correct. Each coordinate needs 5 bits, so total 10 bits.
3 | Binary to decimal | D | D | 1/1 | Correct. 10011000 unsigned is 152.
4 | Parity check | A | A | 1/1 | Correct.
5 | QR code vs barcode | C | C | 1/1 | Correct. QR code has higher error-correction capability.
6 | Audio quality | C | C | 1/1 | Correct. Audio quality is affected by bit depth and sampling rate.
7 | Image formats | D | D | 1/1 | Correct. SVG is not suitable for storing ordinary photographs.
8 | Character coding | D | D | 1/1 | Correct. The bytes shown match UTF-8.
9 | Digitisation | A | A | 1/1 | Correct. A camera is used for digitisation.
10 | Spreadsheet tool | A | A | 1/1 | Correct. Pivot table is the best tool here.
11 | Spreadsheet formula | B | B | 1/1 | Correct. The value is 74.
12 | SQL LIKE | D | B | 0/1 | Only one record is selected. The correct count is 1.
13 | Printer driver | A | A | 1/1 | Correct. The OS may already have the driver installed.
14 | Real-time processing | D | D | 1/1 | Correct.
15 | Output device | C | C | 1/1 | Correct. Plotter is suitable for large precise engineering drawings.
16 | Internet connection | B | B | 1/1 | Correct.
17 | Ports and interfaces | A | B | 0/1 | An external SSD can be connected through Thunderbolt. RJ-45 is a network port.
18 | MAC address | A | A | 1/1 | Correct.
19 | Wi-Fi band | C | C | 1/1 | Correct. 2.4 GHz is better for larger coverage.
20 | IPv6 | D | D | 1/1 | Correct.
21 | Router functions | C | C | 1/1 | Correct.
22 | Domain names | B | B | 1/1 | Correct.
23 | Web video access issue | C | C | 1/1 | Correct.
24 | Smart city | D | A | 0/1 | Smart agriculture is the option that is not normally regarded as a smart city initiative here.
25 | Algorithm tracing | A | A | 1/1 | Correct.
26 | Algorithm tracing | A | A | 1/1 | Correct.
27 | Boolean expression | A | A | 1/1 | Correct.
28 | Flowchart | B | B | 1/1 | Correct. Output is 900.
29 | Error type | D | D | 1/1 | Correct.
30 | Loop output | D | D | 1/1 | Correct.
31 | Loop tracing | B | B | 1/1 | Correct.
32 | Testing | D | D | 1/1 | Correct.
33 | Loop equivalence | A | C | 0/1 | To match the REPEAT...UNTIL behaviour, ALG1 needs an initial B ← B + 1.
34 | Loop condition | D | D | 1/1 | Correct.
35 | Sentinel condition | C | A | 0/1 | X > 0 controls when the algorithm stops. The closest correct description is that it stops when a negative value is entered.
36 | Flowchart equivalence | D | D | 1/1 | Correct.
37 | Array algorithm | D | D | 1/1 | Correct. The code swaps the first and last elements.
38 | Copyright protection | B | D | 0/1 | All three measures can be applied, so the answer is (1), (2) and (3).
39 | Digital divide | C | D | 0/1 | Giving a free laptop to every citizen without one is the least feasible option.
40 | 3D printing | C | C | 1/1 | Correct.

Paper 1B — Section B: 45 / 60

Question | Mark | Reason
Q1 | 4/4 | All parts acceptable. Good decoding, valid storage benefit, and AI application accepted.
Q2 | 4/4 | Good basic hardware, OS and wireless security understanding.
Q3 | 4/4 | Both technical reasons and both design suggestions accepted.
Q4 | 5/6 | Main loss was Q4(b)(i): worm characteristic stated incorrectly.
Q5 | 4/6 | Tracing was strong, but reverse reasoning and algorithm purpose were weaker.
Q6 | 10/12 | Spreadsheet work was strong; main losses were SQL rigour and mixing up authorisation/authentication.
Q7 | 6/10 | Basic array update okay, but escalator algorithm and invalid-test-case reasoning cost marks.
Q8 | 8/10 | Good on storage/hardware and lossy MP3 reasoning; weaker on cloud vs virtualisation and audio-quality attribute.

Paper 2A — Databases: 15 / 30

Question | Mark | Reason
Q1 | 1/3 | SQL1 minimum correct. SQL2 minimum/maximum not accepted.
Q2 | 2/6 | Some idea of denormalisation and anomaly shown, but SQL syntax and filtering were incomplete.
Q3 | 1/4 | Some relationship idea present, but cardinality and optional participation were not handled correctly.
Q4 | 3/4 | Table design issues identified well; rollback answer was relevant but not specific enough.
Q5 | 8/13 | Some SQL was correct, especially join/index basics, but NULL logic, correlation and aggregation details were weak.

Paper 2B — Web Applications Programming: 24 / 30

Question | Mark | Reason
Q6 | 2/2 | Protocol and ACL purpose both correct.
Q7 | 5/5 | Very good UI sketch and correct examples of client-side and server-side scripting.
Q8 | 4/4 | All server questions correct.
Q9 | 4/4 | Good networking fundamentals.
Q10 | 9/15 | Good general web understanding, but metadata, CSS selector logic, and some PHP details cost marks.

Part 2 — Summary

Analysis of performance on MC, Paper 1 long questions, and each question in Paper 2.

MCQ performance summary
  • Paper 1A score: 33/40.
  • This is a strong MCQ performance.
  • You answered most core theory questions correctly across data representation, networking, spreadsheet, flowchart and algorithm tracing.
  • You lost marks on only 7 questions, mainly through precision slips rather than lack of knowledge.
  • The wrong questions were concentrated in SQL pattern matching, hardware interface detail, smart city concept, loop equivalence, sentinel description, copyright protection and digital divide feasibility.
Paper 1 long question summary
  • Q1: Strong. Accurate understanding of custom encoding and a valid AI application.
  • Q2: Strong. Clear OS and wireless security basics.
  • Q3: Strong. Good practical website awareness.
  • Q4: Fairly strong. Only the worm concept was wrong.
  • Q5: Mixed. Good tracing, weaker on deeper algorithm interpretation.
  • Q6: Strong. Spreadsheet work is one of your stronger areas.
  • Q7: Weakest question in Paper 1B. Algorithm design and testing logic need improvement.
  • Q8: Moderate to good. Practical ideas okay, but some theory wording was inaccurate.
Paper 2A summary
  • Q1: Weak. Min/max result questions in SQL set/join logic need more careful reasoning.
  • Q2: Weak. You had the idea of adding a field and creating a view, but the SQL needed to be more exact.
  • Q3: Weak. ER diagram modelling is a major weak area.
  • Q4: Fair. Table design issues were identified reasonably well.
  • Q5: Moderate. Some join and index work was right, but more complex SQL filtering and NULL logic lost marks.
Paper 2B summary
  • Q6: Strong. Basic protocol and permission control were secure.
  • Q7: Very strong. Your UI design and process examples were good and practical.
  • Q8: Strong. Good understanding of server roles.
  • Q9: Strong. Good DHCP, MAC and addressing fundamentals.
  • Q10: Mixed. You know the general web ideas, but exact syntax and selector detail need tightening.
Overall pattern: Your performance is best when the question is practical, concrete and close to real use. You are weaker when the question needs exact database syntax, exact definitions, or abstract reasoning about conditions and structures.

Part 3 — Mistakes

All wrong questions or main mark-loss areas across every paper, with analysis.

Paper 1A wrong questions

  • Q12: SQL LIKE pattern matching. You likely did not track the underscore position carefully. This is a detail-reading mistake.
  • Q17: Port/device interface knowledge. This is a straightforward factual recall question and should be an easy mark next time.
  • Q24: Smart city initiative classification. This suggests some uncertainty in the “social implications / current applications” part of the syllabus.
  • Q33: Loop equivalence. You can trace algorithms, but transforming one loop form into another is weaker.
  • Q35: Sentinel-controlled loop purpose. You understood that the condition controls the loop, but not the exact role expected in exam wording.
  • Q38: Copyright protection measures. This was a syllabus fact question about software protection methods.
  • Q39: Digital divide feasibility. This is another concept judgment question based on practicality, not merely theory.
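The sentinel idea behind Q35 can be sketched in Python (hypothetical inputs; not the exam's exact algorithm): while X > 0 holds, the loop keeps running, and the first non-positive input acts as the sentinel that stops it.

```python
# Sentinel-controlled loop sketch: the condition X > 0 keeps the loop alive,
# so a non-positive value is the sentinel that terminates processing.
def sum_until_sentinel(values):
    total = 0
    for x in values:
        if not x > 0:   # sentinel reached: stop without processing it
            break
        total += x      # only positive values are accumulated
    return total

sum_until_sentinel([3, 5, -1, 7])  # stops at -1, returns 8
```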

Paper 1B wrong / partial-loss questions

  • Q4(b)(i): Worm characteristic was reversed. A worm does self-replicate.
  • Q5(b), Q5(c): You could follow the steps, but you were less confident about reconstructing the input or stating the overall algorithm purpose.
  • Q6(d)(ii): SQL rigor issue. You answered as if the statement had GROUP BY.
  • Q6(e)(ii): Authorisation and authentication were mixed up.
  • Q7(a): You described total passengers entering and leaving, instead of the net onboard passenger count.
  • Q7(c): Algorithm condition and distribution logic were not exact enough.
  • Q7(d): Your invalid test case explanations were too vague.
  • Q8(a)(i): Cloud computing vs virtualisation was not defined accurately.
  • Q8(c)(ii): You did not give the syllabus-expected attribute affecting audio quality.
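The GROUP BY slip in Q6(d)(ii) is easy to demonstrate with a throwaway SQLite table (hypothetical data, not the exam's schema): without GROUP BY an aggregate collapses the whole table into a single row; with GROUP BY you get one row per group.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (dept TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", 10), ("A", 20), ("B", 5)])

# No GROUP BY: one row summarising the entire table.
no_group = conn.execute("SELECT SUM(amount) FROM sales").fetchall()
# [(35,)]

# With GROUP BY: one row per department.
grouped = conn.execute(
    "SELECT dept, SUM(amount) FROM sales GROUP BY dept ORDER BY dept"
).fetchall()
# [('A', 30), ('B', 5)]
```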

Paper 2A wrong / partial-loss questions

  • Q1: Join/set reasoning under constraints needs improvement. These are logic questions, not just syntax questions.
  • Q2: The database idea was there, but the SQL form was not safe enough for full marks.
  • Q3: ER diagram was the weakest area. Cardinality and participation constraints need major revision.
  • Q4(b): “Rollback” was relevant but too general. The question wanted specific changes on users and transaction records.
  • Q5(d), Q5(e): NULL logic, correlation and filtering conditions were the main problems.
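The NULL logic that cost marks in Q5 can be seen in a small SQLite sketch (hypothetical table): a comparison with NULL evaluates to unknown, never true, so `x = NULL` selects nothing and `x IS NULL` must be used instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,)])

# "= NULL" never matches: NULL = NULL is unknown, not true.
eq_null = conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]
# "IS NULL" is the correct test for missing values.
is_null = conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]
# eq_null == 0, is_null == 1
```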

Paper 2B wrong / partial-loss questions

  • Q10(a): Your CMS disadvantage was not specific enough to the system.
  • Q10(b)(iii): You gave examples of content, not metadata.
  • Q10(c)(i): You missed the detail that id="#hero" does not match selector p#hero.
  • Q10(d)(ii): The row-count condition was reversed.
  • Q10(d)(iii): You used the wrong PHP variable for the fetched row.
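The Q10(c)(i) point can be sketched without a browser (plain Python standing in for the CSS matching rule; the element data is hypothetical): the selector `p#hero` matches a `<p>` element whose id attribute is exactly `hero`, so writing `id="#hero"` puts a literal `#` inside the id value and the selector no longer matches.

```python
def matches_p_id_selector(tag, attrs, selector="p#hero"):
    # A tag#id selector matches an element with that tag name and
    # an id attribute equal to the part after '#'.
    sel_tag, sel_id = selector.split("#")
    return tag == sel_tag and attrs.get("id") == sel_id

matches_p_id_selector("p", {"id": "hero"})   # True
matches_p_id_selector("p", {"id": "#hero"})  # False: the id contains a literal '#'
```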
Main mistake pattern: Most of your lost marks came from precision errors, not from total misunderstanding. This is good news, because precision can be improved quite quickly with focused practice.

Part 4 — Improvement

Weakness diagnosis and personal feedback for future improvement.

Your main weaknesses
  • Exact ICT terminology: e.g. worm, authorisation vs authentication, cloud vs virtualisation.
  • Database precision: especially GROUP BY, NULL, views, subqueries, and ER diagrams.
  • Algorithm abstraction: you can often trace code, but you are weaker at saying what the whole algorithm is actually doing.
  • Condition logic: loop equivalence, sentinel conditions and boundary checks still cost marks.
  • Detail-checking in web coding: metadata names, CSS selector matching, variable naming and reversed conditions.
Your strengths
  • Strong MC foundation: 33/40 is a very good Paper 1A score.
  • Good practical sense: especially in networking, spreadsheet use and web application scenarios.
  • Better in applied questions than in highly abstract ones.
  • Good marks in Web Applications: this is one of your strongest modules.
  • Capable of improvement: because your mistakes are often fixable precision issues.
Personal feedback
  • You are doing better than the earlier estimate suggested. The re-marked MCQ shows that your fundamentals are actually quite solid.
  • Your strongest message from this paper is: you do know a lot of the syllabus.
  • The next stage is not to relearn everything from the beginning. The next stage is to make your answers more exact, more exam-safe and more precise.
  • If you improve your database section and tighten your exact wording in theory questions, your total can go up noticeably.
  • In other words, your ceiling is higher than this score. The key is refinement, not rebuilding from zero.
Priority 1

Databases

  • Practise SQL with JOIN, GROUP BY, HAVING, NULL.
  • Redo ER diagram questions with cardinality and optional participation.
Priority 2

Terminology

  • Make a revision list of common definition traps.
  • Memorise short textbook-safe definitions, not vague explanations.
Priority 3

Algorithm purpose

  • After tracing an algorithm, always write one sentence: “This algorithm is used to...”
  • Practise condition conversion between WHILE and REPEAT...UNTIL.
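The key difference in that conversion is when the condition is tested: WHILE tests before the body (so it may run zero times), while REPEAT...UNTIL tests after (so the body always runs at least once). A Python sketch of the two forms (hypothetical counter; Python has no native REPEAT...UNTIL, so it is emulated with a break):

```python
def count_while(n):
    steps = 0
    while n > 0:        # condition tested first: body may run zero times
        n -= 1
        steps += 1
    return steps

def count_repeat_until(n):
    steps = 0
    while True:         # body always runs at least once
        n -= 1
        steps += 1
        if not n > 0:   # UNTIL n <= 0
            break
    return steps

count_while(0)          # 0 iterations
count_repeat_until(0)   # 1 iteration: the body ran before the test
```

For positive starting values the two forms behave identically; the difference only shows at the boundary case, which is exactly what equivalence questions like Q33 probe.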
Suggested short improvement plan
Focus | What to do
Database | Redo 15 SQL questions and 5 ER diagram questions. Write the correct answer fully, not just the final line.
Paper 1 theory | Revise all 7 wrong MC questions and write one line explaining why the correct option is correct.
Algorithms | Practise 10 loop/array questions and focus on “purpose of algorithm” and “condition meaning”.
Web detail | Revise HTML metadata, CSS selectors and PHP variable/result handling.
Final comment: With the corrected MCQ, your paper looks much healthier. Your biggest opportunity now is Paper 2A. If you improve database modelling and SQL precision, your final ICT result can rise quite significantly.