Introduction to AI
by 吳俊逸, 2020-04-28 22:18:46

Q1: How long do you expect it will take for AI to become more commercialized, and hence lower the price for general companies?

Ans: In my opinion, AI will become more commercialized in the next ten years, because the difficulty of developing AI applications is getting lower and lower. There are many free AI development tools and open-source projects such as Python and TensorFlow, and the computational power of hardware such as GPUs has improved significantly and become more widely available in recent years. However, collecting a sufficient data source may be the hardest part of AI commercialization, so it may take general companies more time to gather the data they need.

 

Q2: A neurosurgeon / cognitive neurologist once taught: What is activation? Activation is inhibition of inhibition of inhibition ...... Does this make sense in AI Deep Learning using neural network model?

Ans: Activation functions help the network keep the important information and suppress irrelevant signals. So in a sense it does make sense: by inhibiting part of the signal at each stage, the network decides what gets activated, which is a kind of inhibition of inhibition.
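As a toy illustration (the numbers are my own, not from the lecture), the ReLU activation below "inhibits" negative pre-activations by zeroing them out, so only some signals pass on to the next layer:

```python
def relu(x):
    """Rectified Linear Unit: pass positive signals, inhibit (zero out) negatives."""
    return max(0.0, x)

pre_activations = [-2.0, -0.5, 0.0, 0.5, 2.0]
post_activations = [relu(x) for x in pre_activations]
# Negative inputs are suppressed to 0.0; only positive ones stay "activated".
```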

 

Q3: Can AI be applied to religion in the same way politics uses AI to find target audiences?

Ans: AI can be applied to any field as long as you can clearly define the features of your potential audience. If you can characterize the differences between the target audience and the non-target audience, you can use AI.

 

Q4: What is ‘transfer learning’ and ‘transfer extraction’?

Ans: When we train a model with deep learning, the model learns to extract features from the training data set. Transfer learning is an application of deep learning in which the weights of a pre-trained model (already trained on a large amount of data) are transferred to another neural network. By transferring these features and weights, we do not need to train an entire network from scratch ourselves.
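A minimal sketch of this idea, with made-up weights standing in for a real pre-trained model (in practice the weights would come from a network trained on a large data set such as ImageNet):

```python
# Hypothetical "pre-trained" weights, purely for illustration.
pretrained = {"conv1": [0.5, -0.3], "conv2": [0.8, 0.1]}

def build_transfer_model(pretrained_weights):
    """Copy the pre-trained layers (kept frozen) and add a fresh head."""
    model = {name: list(w) for name, w in pretrained_weights.items()}
    model["new_head"] = [0.0, 0.0]   # only this layer is trained from scratch
    return model

model = build_transfer_model(pretrained)
# The transferred layers keep their already-learned weights.
```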

 

Q5: I'm confused. The slide mentions that deep learning is not good for unstructured data, but as far as I know it can be applied to image recognition. Does this mean images are structured data?

Ans: Deep learning is less effective on unstructured data than on structured data. Images are a kind of unstructured data; that is why we need to do a lot of feature-extraction work before applying deep learning to image recognition.

 

Q6: How to handle large size (eg full HD) images using CNN?

Ans: First, resize the image while keeping the important details. Second, if the cost permits, use a better GPU for the computation.
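As a rough sketch of the resizing step (nearest-neighbour downsampling on a made-up 8x8 "image"; a real pipeline would use a library such as OpenCV or Pillow with a better resampling filter):

```python
def downsample(image, factor):
    """Keep every `factor`-th pixel in both dimensions,
    preserving the aspect ratio."""
    return [row[::factor] for row in image[::factor]]

full = [[r * 10 + c for c in range(8)] for r in range(8)]  # toy 8x8 image
small = downsample(full, 2)                                # becomes 4x4
```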

 

Q7: How is video data labelled for input to deep learning, e.g. for an autonomous vehicle?

Ans: In the development of autonomous-vehicle technologies, sensor fusion integrates inputs such as cameras, radar, ultrasonic sensors, and lidar so that the machine can perceive and continuously update the state of its environment. As for how the system of an autonomous vehicle is trained, several techniques help. Camera input combined with Deep Learning Based Visual Odometry (DeepVO), simultaneous localization and mapping (SLAM), and driving-scene segmentation supports localization and mapping. Audio processed with recurrent neural networks can recognize road texture and weather conditions. Deep reinforcement learning provides the ability to deal with under-actuated control, uncertainty, motion blur, and the lack of sensor calibration or prior map information. Together, these components output the vehicle's pose and feed motion planning so the vehicle can reach its destination.

 

Q8: What is the essential part that makes the AI algorithm of deep learning smarter than others?

Ans: The hidden layers are among the most essential parts. Multiple hidden layers allow deep neural networks to learn features of the data in a so-called feature hierarchy: simple features (e.g. two pixels) recombine from one layer to the next to form more complex features (e.g. a line). Nets with many layers pass input data (features) through more mathematical operations than nets with few layers, and are therefore more computationally intensive to train. Computational intensity is one of the hallmarks of deep learning, and it is one reason why chips called GPUs are in demand for training deep-learning models.
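The feature-hierarchy idea can be sketched as a tiny forward pass (the weights are arbitrary, purely for illustration): each layer recombines the previous layer's outputs into new, higher-level features.

```python
def relu(x):
    return max(0.0, x)

def layer(inputs, weights):
    """Fully connected layer: each output is a weighted sum of all inputs."""
    return [relu(sum(w * v for w, v in zip(row, inputs))) for row in weights]

x = [1.0, 2.0]                               # raw inputs ("pixels")
h1 = layer(x, [[1.0, -1.0], [0.5, 0.5]])     # first layer: simple combinations
h2 = layer(h1, [[1.0, 1.0]])                 # second layer: recombines h1 ("a line")
```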

 

Q9: For explainable AI applied to deep learning today, are the explanations it produces any good?

Ans: A deep neural network (DNN) that learns millions of parameters and may be regularized by techniques like batch normalization and dropout is quite incomprehensible. Many other machine-learning techniques also face this problem. However, with DNNs achieving state-of-the-art performance in many domains, often by a big margin, it is hard to make a case against using them.

 

Q10: Have you heard about applying AI to detect and interpret micro-expressions on the human face, e.g. lie detection?

Ans: Yes, AI can help humans in many fields. Here is one example of a plan to use AI as a lie detector at airports: https://www.ft.com/content/c9997e24-b211-11e9-bec9-fdcab53d6959.

AI can be used to interpret human micro-expressions as long as we have enough data to train the system. In some cases AI can even beat humans, because it gives more consistent results than a human does. In that example, the system's accuracy rate is 80-85 per cent, which far exceeds the average human accuracy of 54 per cent.

 

Q11: How to handle large size (eg full HD) images using CNN?

Ans: Exploring and applying machine-learning algorithms to datasets that are too large to fit into memory is pretty common. Here are some common suggestions you may want to consider:

1. Allocate more memory by re-configuring your tool or library.
2. Work with a smaller sample before fitting a final model on all of your data, using progressive data-loading techniques.
3. Get access to a much larger computer with an order of magnitude more memory by renting compute time on a cloud service like Amazon Web Services.
4. Change the data format: use a binary format like GRIB, NetCDF, or HDF to speed up data loading and use less memory, instead of raw ASCII text like a CSV file.
5. Stream data or use progressive loading together with optimization techniques such as stochastic gradient descent.
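Suggestion 5 can be sketched as follows (synthetic data generated on the fly stands in for rows streamed from disk; the "model" is a single weight fit to y = 3x):

```python
def stream_examples():
    """Yield one (feature, target) pair at a time, as if read row by row
    from a file too large to fit in memory."""
    for x in range(1, 11):
        yield float(x), 3.0 * float(x)       # synthetic data: y = 3x

w, lr = 0.0, 0.001
for _ in range(10):                          # a few passes over the stream
    for x, y in stream_examples():
        grad = 2.0 * (w * x - y) * x         # gradient of the squared error
        w -= lr * grad                       # stochastic gradient descent step
# w ends up close to the true slope 3.0, yet no full dataset was ever in memory.
```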

 

Q12: Must the example data include both normal and abnormal cases, e.g. driving well versus driving badly and causing an accident? If so, how do we create the bad-event data?

Ans: The example data does not necessarily have to include both normal and abnormal cases. What matters most is that the data covers the types of events you want the model to learn, such as car accidents caused by drunk or fatigued driving. To obtain bad-event data, you have to collect a large amount of data on the roads; since car accidents are common, you may be able to collect such data that way.

 

Q13: How can we set the goal (or direction) for deep learning? It seems like deep learning just classifies data into groups of something. If we guide the deep learning (someone extracts features from the data and puts them in), is it then called machine learning?

Ans: Deep learning can not only classify data into groups but also has many other applications. For example, we can use deep learning to recover a color picture from a black-and-white one, mimic someone's facial movements, reconstruct a 3D scene, etc. The goal of deep learning is open-ended, because there are always better solutions to be found by using a different kind of model or a combination of models; even changing some hyperparameters may yield a better solution. So deep learning can help humans find 'new' features that correspond to the 'old' classified groups in classification problems. As for the second part: basically yes, but if we can extract the features ourselves, it means the problem is not that complex and we do not need deep learning, which takes more time than classical machine learning.

 

Q14: Do you have any examples of real-world unsupervised-learning use cases right now?

Ans: Unsupervised learning can solve various business problems. For example, banks can use unsupervised-learning algorithms to evaluate whether a transaction is fraudulent, and marketing analysts can use the same techniques to further optimize web conversion rates.
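A toy sketch of the fraud example (the amounts are made up; no fraud labels are used, which is what makes it unsupervised):

```python
amounts = [20.0, 25.0, 22.0, 19.0, 24.0, 21.0, 500.0]  # one suspicious outlier

mean = sum(amounts) / len(amounts)
std = (sum((a - mean) ** 2 for a in amounts) / len(amounts)) ** 0.5

def is_anomaly(amount, threshold=2.0):
    """Flag transactions more than `threshold` standard deviations from the mean."""
    return abs(amount - mean) / std > threshold

flagged = [a for a in amounts if is_anomaly(a)]  # only the outlier is flagged
```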

 

Q15: Could it be possible to utilize AI to predict earthquakes?

Ans: Yes, it can. Artificial intelligence can predict the occurrence of numerous natural disasters, such as earthquakes. An AI system can be trained on seismic data to analyze the magnitude and patterns of earthquakes and to predict the location of earthquakes and aftershocks. AI-based systems look for changes in images to predict the risk of disasters such as earthquakes and tsunamis. Moreover, these systems also monitor aging infrastructure: they can detect deformations in structures, which can be used to reduce the damage caused by collapsing buildings and bridges or subsiding roads.

 

Q16: What are the AI applications used in automotive assembly and manufacturing industry?

Ans: AI Application for Automotive: Autonomous vehicles — In the automotive industry, autonomous vehicles are the new holy grail. Manufacturers and their technology partners are working overtime to develop AI-driven systems to enable self-driving cars and trucks. These systems incorporate a wide range of AI-enabled technologies, such as deep learning neural networks, natural language processing and gesture-control features, to provide the brains for vehicles that can safely drive themselves, with or without a human driver on board.

AI Application for Manufacturing: Manufacturing — AI enables applications that span the automotive manufacturing floor. Automakers can use AI-driven systems to create schedules and manage workflows, enable robots to work safely alongside humans on factory floors and assembly lines, and identify defects in components going into cars and trucks. These capabilities can help manufacturers reduce costs and downtime in production lines while delivering better finished products to consumers.

 

Q17: How can we know whether a model is undertrained or overtrained?

Ans: Overtraining happens when the model begins to memorize the training data instead of learning from it (learning it by heart). This means it gives correct predictions on the training data set but may perform poorly on unseen data. Undertraining (underfitting), on the other hand, occurs when the model does not fit the data enough and performs poorly even on the training data; this happens because the model is unable to capture the relation between the inputs and the target values.
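The difference can be sketched with a toy model (synthetic data where y = 2x): a model that merely memorizes the training pairs scores perfectly on them but fails on a held-out set, which is exactly the signature of overtraining.

```python
train = [(1, 2), (2, 4), (3, 6)]
holdout = [(4, 8), (5, 10)]              # unseen during "training"

lookup = dict(train)                     # memorizes the training pairs exactly

def memorizer(x):
    return lookup.get(x, 0)              # knows nothing about unseen inputs

def learned_rule(x):
    return 2 * x                         # actually captured the relation

def total_error(model, data):
    return sum(abs(model(x) - y) for x, y in data)

# Overtraining signature: zero training error, large held-out error.
# A model that learned the relation generalizes to both sets.
```

Comparing the error on the training set against the error on a held-out set is the standard way to spot this gap in practice.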

REF: TTAIC & AIGO & III