OLIVIER DE WECK: One of the key ways of learning is to look at past examples of both successful and failed projects or designs. What we find in systems engineering is that the problems often begin at the requirements stage, where either a requirement is missing, or there's redundancy, or in some cases requirements are written that are overly ambitious and utopian given the time frame and the budget available. So you always have a trade-off between how ambitious you are technically, the schedule and budget that are available, and how much risk you are willing to take. The students are pushing the envelope in these different dimensions and learning what's feasible, what's not feasible, and what we can learn from past failures. We can learn from general examples of projects that had difficulties, whether in aerospace, in the automotive industry, or in consumer products. In hindsight, one is always smarter, but it is worth looking at these.
Specifically with the CanSat competition, since the competition has a rich history, a very active set of sponsors, and quite a good website as well, there are a lot of opportunities to learn from mistakes that other teams have made in the past. For example, designing a glider without sufficient tolerances between the vehicle itself and the container, so that the deployment doesn't go well. Another example is designing a glider that does not fly properly in a circular pattern, and in fact flies too well, flying off into the woods never to be seen again. There are, of course, a lot of lessons learned about establishing a stable communications link and having worked-out procedures. How do you deploy your system? How do you test it? How do you communicate with it? How do you collect it? How do you post-process the data? In each of those critical areas, we see that learning from the past is critical to avoiding mistakes that could otherwise be repeated.