Master's Program in Electrical Engineering


Level: Academic Master's


Recent Submissions

Now showing 1 - 5 of 385
  • Item
    Gesture recognition for prosthesis control using electromyography and force myography based on optical fiber sensors
    (Universidade Federal do Espírito Santo, 2025-09-17) Ramirez Cortés, Felipe; Segatto, Marcelo Eduardo Vieira; https://orcid.org/0000-0003-4083-992X; http://lattes.cnpq.br/2379169013108798; Díaz, Camilo Arturo Rodríguez; https://orcid.org/0000-0001-9657-5076; http://lattes.cnpq.br/2410092083336272; https://orcid.org/0009-0008-7465-3887; http://lattes.cnpq.br/0873671321556842; Silveira, Mariana Lyra; https://orcid.org/0000-0002-0368-5629; http://lattes.cnpq.br/5307116832176112; Silva, Jean Carlos Cardozo da; https://orcid.org/0000-0003-2310-9159; http://lattes.cnpq.br/9949032159595994
    Amputation is the partial or total loss of a limb. It is a challenging event that affects people worldwide, with an estimated prevalence of 552.45 million in 2019, a figure that continues to grow. The loss of an upper limb, in particular, strongly affects a person’s ability to perform activities of daily living (ADL), communicate, and interact with their environment. To restore lost functionality, assistive devices known as prostheses have been developed. Modern active prostheses can be controlled by interpreting the user’s movement intention through various biological signals, such as Surface Electromyography (sEMG), which measures the electrical activity of muscles. While sEMG is an established and predominant control method, it has limitations. Force myography (FMG), a technique that measures changes in muscle volume and pressure during contraction, has emerged as a promising alternative, offering advantages such as greater signal stability and reduced sensitivity to skin conditions like sweat. This master’s dissertation proposes and evaluates a hybrid sensor system combining FMG and sEMG to create a more robust and precise method for hand gesture classification. The system integrates a custom-developed FMG sensor, which uses a Fiber Bragg Grating (FBG) embedded within a flexible 3D-printed structure, with a commercial sEMG sensor. The primary goal is to improve the control of real and virtual prosthetic hands for amputees. The study involved recording signals from able-bodied subjects while they performed tasks involving different hand angles and grip forces. Data from the sEMG, FMG, and the combined hybrid system were used to train and test seven different machine learning algorithms, with the dataset split into 80% for training and 20% for testing. Results showed that the optimal sensing strategy is task-dependent. For angle classification, the hybrid FMG-sEMG sensor achieved the highest accuracy of 85.62% with the K-Nearest Neighbors (KNN) classifier.
For force classification, the sEMG sensor alone was superior, reaching an accuracy of 92.53% with a Support Vector Machine (SVM). Furthermore, the hybrid system’s feasibility for real-time application was validated in a Virtual Reality (VR) environment, where it achieved 99.83% accuracy in classifying binary open/close hand gestures. This research demonstrates the complementary nature of FMG and sEMG signals, concluding that a multimodal approach can be used to develop more sophisticated, reliable, and intuitive control systems for upper-limb prostheses by selecting the best sensing modality for the desired task.
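The 80/20 train–test protocol with KNN and SVM classifiers described in the abstract above can be sketched as follows. This is a minimal illustration on synthetic stand-in features, not the dissertation's actual FMG/sEMG recordings or preprocessing; the channel count, class labels, and hyperparameters are assumed for the example.

```python
# Hypothetical sketch: the feature matrix below is a synthetic stand-in for
# windowed FMG/sEMG features; the real dataset is not public.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Fake hybrid feature matrix: 300 windows x 8 features (e.g. 4 sEMG + 4 FMG channels)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 4, size=300)      # four hypothetical hand-angle classes
X += y[:, None] * 0.8                 # class-dependent shift so there is signal to learn

# 80% training / 20% testing split, as described in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    model = make_pipeline(StandardScaler(), clf)  # scale features, then classify
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```

In practice the same comparison would be repeated per sensing modality (sEMG-only, FMG-only, hybrid) to reveal the task-dependent behavior the abstract reports.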
  • Item
    Virtualização de ambientes em tempo real para interação multimodal : teleoperação com uso de realidade mista e feedback háptico
    (Universidade Federal do Espírito Santo, 2025-09-06) Vieira, Igor Batista; Mello, Ricardo Carminati de; https://orcid.org/0000-0003-0420-4273; http://lattes.cnpq.br/1569638571582691; Frizera Neto, Anselmo; https://orcid.org/0000-0002-0687-3967; http://lattes.cnpq.br/8928890008799265; https://orcid.org/0009-0007-9547-4485; http://lattes.cnpq.br/; Rodríguez Díaz, Camilo Arturo; https://orcid.org/0000-0001-9657-5076; http://lattes.cnpq.br/2410092083336272; Alsina, Pablo Javier; https://orcid.org/0000-0002-2882-5237; http://lattes.cnpq.br/3653597363789712
    Conventional teleoperation interfaces, based on two-dimensional monitors and non-immersive controllers, present significant limitations for human-robot interaction. The absence of depth perception, restricted field of view, and high cognitive load hinder the operator’s ability to build an accurate mental model of the remote environment, reducing the effectiveness of robot control. In this context, the integration of immersive technologies and haptic devices emerges as an alternative to enhance the user’s sense of presence, overcome perceptual barriers, and make teleoperation more natural and efficient. To address these challenges, this dissertation proposes the development of a multimodal teleoperation system, composed of a mobile robotic platform equipped with perception sensors, a simultaneous localization and mapping (SLAM) module, and an immersive interface based on Virtual Reality integrated with a haptic device. The architecture was designed to operate in a distributed manner, with processing shared between the robot and the operator station, enabling the construction of a low-latency digital twin. Two experimental studies were conducted: the first validated the accuracy of the visual mapping system compared to classical approaches, while the second evaluated the haptic interface in user teleoperation tasks. The results obtained confirmed the hypothesis that the combination of Virtual Reality and haptic feedback provides a telepresence experience superior to traditional solutions. The system demonstrated robustness in environment mapping, low response time in data transmission, and an increased sense of immersion reported by the users. Specifically, the user study demonstrated that the immersive interface was able to reduce the average number of collisions from 3.00 to less than 0.3 and decrease the perceived workload (NASA-TLX) by more than 50%.
These findings highlight the potential of the proposed approach as a relevant contribution to the advancement of robotic teleoperation, with possible applications in remote inspection, hazardous environments, and human-robot collaboration systems.
  • Item
    Fusão de dados para a localização e navegação de robôs móveis em espaços inteligentes programáveis baseados em visão computacional
    (Universidade Federal do Espírito Santo, 2025-08-20) Oliveira, Matheus Dutra de; Mello, Ricardo Carminati de; https://orcid.org/0000-0003-0420-4273; http://lattes.cnpq.br/1569638571582691; Vassallo, Raquel Frizera; https://orcid.org/0000-0002-4762-3219; http://lattes.cnpq.br/9572903915280374; https://orcid.org/0009-0002-4548-8065; http://lattes.cnpq.br/5802812159654028; Cordeiro, Rafael de Angelis; https://orcid.org/0000-0002-9094-3365; http://lattes.cnpq.br/1957732976527194; Fernandes, Mariana Rampinelli; https://orcid.org/0000-0001-8483-5838; http://lattes.cnpq.br/6481644695559950
    The estimation of mobile robot localization in indoor environments is one of the central challenges of autonomous navigation. Among the main techniques used to address this problem are Multi-View Visual Odometry, obtained through a multi-camera network, and Monte Carlo Localization. Both approaches have limitations: areas without camera coverage render navigation unfeasible when relying solely on visual odometry, while symmetric environments hinder convergence in the Monte Carlo method. Aiming to overcome these issues and achieve a more robust and reliable localization estimate, this work proposes the combination of these two global localization techniques through a data fusion approach based on Kalman Filter methods (Extended Kalman Filter and Unscented Kalman Filter). Additionally, the integration of the smart space architecture with the Robot Operating System (ROS) is adopted to implement this fusion. As a result, the fused localization can be integrated into the ROS navigation stack, leading to a complete localization and navigation system, and allowing the system to be triggered by other components of the smart environment. The system was evaluated in critical scenarios and case studies conducted in real environments. The results indicate that the information fusion effectively addresses the inherent limitations of each localization source, while increasing the robot’s global orientation accuracy by up to 12% and improving localization estimates by more than 5.2% when both sources are available.
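The fusion idea in the abstract above can be illustrated with a much simpler case than the dissertation's EKF/UKF: fusing two independent Gaussian estimates of the same quantity by inverse-variance weighting, which is the core of a Kalman-style measurement update for a static one-dimensional state. The numeric readings below are hypothetical, not results from the work.

```python
# Minimal sketch (not the dissertation's EKF/UKF implementation): fuse two
# independent 1D Gaussian estimates of the same position by inverse-variance
# weighting — the building block of a Kalman measurement update.
def fuse(x1, var1, x2, var2):
    """Return the fused mean and variance of two Gaussian estimates."""
    w1 = var2 / (var1 + var2)            # more weight to the lower-variance source
    w2 = var1 / (var1 + var2)
    x = w1 * x1 + w2 * x2
    var = (var1 * var2) / (var1 + var2)  # fused variance is below both inputs
    return x, var

# Hypothetical readings (metres): multi-camera visual odometry vs. Monte Carlo pose
x_vo, var_vo = 2.00, 0.04
x_mcl, var_mcl = 2.20, 0.16
x_fused, var_fused = fuse(x_vo, var_vo, x_mcl, var_mcl)
print(x_fused, var_fused)  # fused estimate lies between the two, nearer the VO
```

The key property, which carries over to the full EKF/UKF setting, is that the fused variance is always smaller than either source's, so combining the camera-network odometry with Monte Carlo Localization can only tighten the estimate when both are available.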
  • Item
    Análise de aplicações em ambientes reais de IoT utilizando tecnologias de radiofrequência Wi-SUN e LoRaWAN
    (Universidade Federal do Espírito Santo, 2025-06-28) Cerri, Hudson Mereles; Segatto, Marcelo Eduardo Vieira; https://orcid.org/0000-0003-4083-992X; http://lattes.cnpq.br/2379169013108798; https://orcid.org/0009-0005-3713-594X; http://lattes.cnpq.br/7902825869094261; Munaro, Celso Jose; https://orcid.org/0000-0002-2297-7395; http://lattes.cnpq.br/5929530967371970; Santos, Jessé Gomes dos; https://orcid.org/0000-0001-8984-0599; http://lattes.cnpq.br/6857610972823488
    This dissertation describes practical applications of two RF technologies, combining the available literature with real tests carried out with IoT devices from the company Zaruc. It evaluates the behavior of sub-gigahertz frequency protocols applied to the telemetry of electronic energy meters by Zaruc Tecnologia and, where pertinent, compares the results with theoretical predictions obtained through Radio Mobile. Among the technologies currently present in the Internet of Things market for utilities, this work focuses on Wi-SUN and LoRaWAN, seeking a model for analyzing both data collection systems for energy measurement, grounded in observation of real experiments, with qualitative and quantitative reflections through the relevant performance indexes described in this work. The goal of this research is to provide a critical view of the empirical application of each technology, serving as an aid to decision-making when determining the ideal protocol to meet specific objectives within the field of electric energy data telemetry.
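A first-order intuition for why the abstract above concerns sub-gigahertz bands can be shown with the free-space path loss (Friis) formula. Note this is only an illustrative sketch: Radio Mobile models propagation with the Longley-Rice/ITM model, not free-space loss, and the 915 MHz carrier and 5 km link distance are assumed example values, not figures from the work.

```python
# Illustrative only: free-space path loss, FSPL(dB) = 20*log10(d_km)
# + 20*log10(f_MHz) + 32.44. Real coverage studies (e.g. Radio Mobile)
# use terrain-aware models such as Longley-Rice/ITM.
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (Friis formula; d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 5.0  # km — hypothetical meter-to-gateway link
for f in (915.0, 2400.0):  # sub-GHz band used by Wi-SUN/LoRaWAN vs. 2.4 GHz
    print(f"{f:7.1f} MHz: {fspl_db(d, f):6.2f} dB")
```

Over the same distance, the 915 MHz carrier suffers roughly 8 dB less free-space loss than 2.4 GHz, one reason sub-GHz protocols are favored for long-range, low-power energy telemetry.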
  • Item
    Effects of human-robot interaction in smart walker-assisted locomotion using mixed reality scenarios
    (Universidade Federal do Espírito Santo, 2025-01-01) Loureiro, Matheus Penido; Frizera Neto, Anselmo; https://orcid.org/0000-0002-0687-3967; http://lattes.cnpq.br/8928890008799265; http://lattes.cnpq.br/5058609108829074; Lima, Eduardo Rocon de; https://orcid.org/0000-0001-9618-2176; http://lattes.cnpq.br/6623746131086816; Alsina, Pablo Javier; https://orcid.org/0000-0002-2882-5237; http://lattes.cnpq.br/3653597363789712
    The decline in neuromusculoskeletal function in older adults can significantly affect their motor control, independence, and walking ability, ultimately reducing their quality of life. With the global aging population on the rise, supporting independent mobility and enhancing rehabilitation techniques have become critical goals. The use of augmentative devices, such as smart walkers (SW), can help provide mobility assistance and enhance residual movement capacity. SWs stand out among these devices by offering active physical support, fall prevention, as well as cognitive and navigation assistance. Despite these improvements, people may still experience frustration due to repetitive tasks and discomfort during recovery, which can lead to higher dropout rates in rehabilitation programs. In this context, integrating rehabilitation devices with mixed reality (MR) tools offers a promising approach for gait training and rehabilitation, potentially improving clinical outcomes, motivation, and adherence to therapy. However, concerns about MR-induced cybersickness and potential changes in gait patterns remain. This master's dissertation investigates the gait parameters of fourteen elderly participants under three conditions: free walking (FW), SW-assisted gait (AG), and SW-assisted gait combined with MR assistance (AGMR). Kinematic data from both lower limbs were captured using a 3D wearable motion capture system to evaluate the kinematic changes associated with SW use and how MR integration may influence these adaptations. Additionally, cybersickness symptoms were assessed using a questionnaire after the AGMR condition. The results reveal significant kinematic differences between FW and both AG and AGMR, with reductions in sagittal plane motion of 16%, 25%, and 38% at the hip, knee, and ankle, respectively, in both AG and AGMR compared to FW.
However, no significant differences were observed between AG and AGMR gait parameters, and no MR-related adverse effects were reported. These findings suggest that MR can be effectively incorporated into walker-assisted gait rehabilitation without negatively impacting kinematic performance, while offering potential benefits for motivation and therapy adherence.