
Innovative Design of Visual Communication System for Animated Character Graphic Images in Virtual Reality Environment

Zhengyu Yang1
1Guangdong Construction Vocational and Technical College, Guangzhou 510440, Guangdong, China.

Abstract

Because existing visual communication systems fail to identify edge data, the motion continuity of animated characters is poor; a novel design of a visual communication system for animated character graphic images in a virtual reality environment is therefore offered. This paper investigates how best to integrate graphic image processing into visual communication design. Using methods such as picture processing, visual image processing, and word processing through various processing tools, three design processes are applied: refining the object, graphic image reorganisation, and graphic image refinement. The Sobel edge operator is also introduced to support the application of graphic image processing in visual communication design. In simulation tests, the motion of an animated character was generated using the designed system and an existing system, respectively. Analysis of the motion capture comparison chart confirms the effectiveness of the designed system: the movements it creates are more coherent, and the continuous effect of the limb positions during motion is close to that of a real person.

1. Introduction

The ability of people to appreciate beauty is always developing, raising the bar for visual communication design, in which graphic image processing is crucial [1]. Reflecting this trend, many nations have declared the animation sector one of their pillar industries [2]. Cartoon animation is increasingly becoming a significant form of entertainment and a teaching tool. The animation business in China started slowly but is now growing quickly and has made significant progress in the development and study of new animation production tools. The visual communication of animated characters' graphic images has also been used in creating animation for the virtual reality environment [3]. Presentation has progressed from the original two-dimensional plane to today's three-dimensional form, a change that primarily depends on the visual communication system to better reflect the overall effect of visual communication [4].

A significant area of creative advertising design is visual communication technology. In creative advertising design, virtual reality technology is utilised to integrate many elements, expanding the design language, optimising the layout, generating unexpected artistic effects, and improving the business environment [5]. Advertising practitioners must master both technology and content: they must be able to combine virtual object models, materials, lighting, and other elements of the virtual scene into a realistic virtual world, using association, leveraging, and other technical methods. The advertising industry is currently undergoing transformation. Virtual reality technology is being used to create visual effects that catch the audience's attention and improve the advertising effect, and new advertising formats and business models are also being developed [6].

Application software interfaces are optimised through multi-level interface design and detail processing of graphic images, and interaction interface optimisation design is updated in terms of theme, sharing, and ease of development. This design is then incorporated into the conventional mobile phone system interface style, with custom theme colour schemes, defined shapes and fonts, and theme editor functions [7]. As machines gradually replace human hands and painting software develops, art education and art colleges are also seeking new directions; graphic image design and visual communication technology continue to advance, and virtual technology brings a new revolution to the design of painting illustrations [8,9]. Virtual technology provides quite realistic rendering: no longer a simple picture overlay processing mode but an artificial-intelligence-assisted one. The software offered by virtual reality technology can be used to colour and complete the outlines of draft designs based on hand-drawn graphic pictures, in terms of light and dark changes and object matching.

The continuous effect of animated characters' motion is unsatisfactory because existing methods do not capture and recognise edge data when creating graphic images of animated characters. To this end, a novel design for a visual communication system for graphic images of animated characters in a virtual reality setting is offered [8,9]. Utilising graphic elements, picture elements, and colour elements in concert to create versatile visual communication works significantly enhances the impact of visual communication.

2. Innovative Design of Visual Communication System for Animated Character Graphic Images

A. Hardware design

When processing animated images, the animated character graphic image visual communication system needs to employ a 3D engine, and the renderer is the essential component of the 3D engine [10]. The renderer employs a rasterisation technique with an appropriate hardware architecture for rendering and is constructed on top of the underlying graphics application programming interface (API) [11]. The two most widely used APIs are DirectX and OpenGL. To support these APIs, the renderer motherboard must be designed as shown in Figure 1.

The Huainan X79-8D dual-socket large board was chosen for the renderer. It is an E-ATX board with eight DDR3 memory slots supporting 1866/1600/1333 MHz memory [12]. Its SATA configuration is 2x SATA 6 Gb/s and 4x SATA 3 Gb/s; the graphics card slot configuration is 2x PCI-E 3.0 x16, with one 24-pin and two 8-pin power connectors. The power supply uses 7+7 phases, the PCB has 10 layers, and onboard audio is provided by an ALC887 7.1-channel codec [13]. With this renderer motherboard, computer-generated graphics of animated figures can be made to appear hand-drawn, optimising the visual communication experience and completing the hardware design of the graphic image visual communication system [14].

B. Software design

1) Introduction of the Sobel edge operator

The edges of an animated character's graphic image can alter significantly during the outline phase, so the Sobel edge operator [15] is introduced to detect image edge data more accurately. Let the grey-scale function of the animated character graphic image be \(f\left( {x,y} \right)\), where \(x\) and \(y\) are the horizontal and vertical coordinates of the pixel. With \({G_x}\) denoting the gradient in the x-direction, we obtain:

\[\begin{aligned} \label{e1} {G_x} ={}& \left[ f\left( {x - 1,y + 1} \right) + 2f\left( {x,y + 1} \right) + f\left( {x + 1,y + 1} \right) \right] \\ & - \left[ f\left( {x - 1,y - 1} \right) + 2f\left( {x,y - 1} \right) + f\left( {x + 1,y - 1} \right) \right]. \end{aligned}\tag{1}\]

The horizontal and vertical 3 × 3 gradient kernels in the grey-scale function are obtained from the corresponding pixel weighting factors of the gradient calculation, as in Eq. (2):

\[\label{e2} \left[\begin{array}{ccc} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{array}\right], \quad \left[\begin{array}{ccc} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{array}\right].\tag{2}\]

The system in this study uses a threshold of 130 for discrimination [16]: when the Sobel operator is used for edge recognition, points whose gradient magnitude exceeds this threshold are marked as edge points. The following relationship exists between the recognition speed of the image's contours and the pixel position:

\[\label{e3} {v_{k + 1}} = {v_k} + {c_1}{r_1}\left( {Pbes{t_k} - {x_k}} \right) + {c_2}{r_2}\left( {Gbes{t_k} - {x_k}} \right),\tag{3}\]

where \(v_k\) and \(x_k\) are the update velocity and position at iteration \(k\), \(Pbest_k\) and \(Gbest_k\) are the individual and global optimal positions, \(c_1\) and \(c_2\) are learning factors, and \(r_1\), \(r_2\) are random numbers in \([0,1]\); this is the standard particle-swarm update form.
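Returning to the edge-discrimination step, the following is a minimal sketch that applies the Eq. (2) kernels to a grey-scale frame and thresholds the gradient magnitude at 130; the `sobel_edges` helper and the synthetic test frame are illustrative assumptions, not part of the original system.

```python
import numpy as np
from scipy.signal import convolve2d

# 3 x 3 Sobel kernels from Eq. (2): horizontal (Gx) and vertical (Gy) gradients.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def sobel_edges(gray: np.ndarray, threshold: float = 130.0) -> np.ndarray:
    """Binary edge map: pixels whose gradient magnitude exceeds the
    threshold (130 in this paper) are marked as edge points."""
    gx = convolve2d(gray, KX, mode="same", boundary="symm")
    gy = convolve2d(gray, KY, mode="same", boundary="symm")
    magnitude = np.hypot(gx, gy)        # sqrt(gx^2 + gy^2)
    return magnitude > threshold

# Example: a synthetic 8-bit grey-scale frame standing in for a character image.
frame = np.random.randint(0, 256, size=(64, 64)).astype(float)
edges = sobel_edges(frame)
print(edges.sum(), "edge pixels detected")
```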

Because the edge image is relatively coarse at the single-pixel level, it needs to be refined, and the best available refinement technique here is the Hilditch algorithm [17]. The first step is to define the 'display operator rendering' program, global proc resolutionGate(), after which a few major steps complete the recognition and refinement of the edges of the animated character's graphic image.
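A minimal sketch of this refinement step is shown below; it uses skimage's `thin` as a readily available stand-in for Hilditch thinning, with a synthetic thick edge as input.

```python
import numpy as np
from skimage.morphology import thin

# Refine a coarse Sobel edge map down to single-pixel width.
# `thin` iteratively peels boundary pixels and is used here as a
# stand-in for the Hilditch thinning step described in the text.
edges = np.zeros((64, 64), dtype=bool)
edges[30:34, 10:50] = True             # a 4-pixel-thick synthetic edge

skeleton = thin(edges)
print(edges.sum(), "->", skeleton.sum(), "pixels after thinning")
```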

2) Motion capture module design

The primary goal of an animated character’s motion simulation is to govern the character’s motion behaviour, which can vary greatly in complexity [18]. Figure 2 displays a typical model for regulating the behaviour of animated characters.

Based on the surroundings, behavioural decisions are made, and motion capture file production is controlled throughout this time. As illustrated in Figure 3, there is interaction between the file node, the file information node, and the motion capture module when handling the files of the motion capture module [19].

The external name of a motion capture file contains a file namespace, and the namespaces form a mapping table, shown in Table 1.

Table 1: File namespace table

Parent namespace ID    Namespace    Namespace ID
01                     File         02
01                     other        03
02                     Render1      04
02                     Render2      05
02                     Render3      06
02                     Render4      07
02                     Render5      08
01                     File         09

Table 1 has three fields: the name of the namespace, its identifier, and the identifier of the parent namespace pointing to the next higher level. Each namespace identifier uniquely identifies one namespace, and the parent identifiers link the entries into a linked-table structure suited to decentralised storage. The rendering cluster file information service thus enhances both the consistency of animated character movement during animation production and the reliability of information storage. This completes the design of the visual communication system for the graphic images of animated characters in the virtual world.
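For illustration, a minimal sketch of such a linked namespace table follows; the `NamespaceEntry` record, the `register`/`full_path` helpers, and the root entry with ID 01 are assumptions made for the example rather than details given in the paper.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class NamespaceEntry:
    """One row of Table 1: name, own ID, and parent's ID."""
    name: str
    ns_id: str
    parent_id: Optional[str]   # None marks an assumed root namespace

table: Dict[str, NamespaceEntry] = {}

def register(name: str, ns_id: str, parent_id: Optional[str]) -> None:
    table[ns_id] = NamespaceEntry(name, ns_id, parent_id)

def full_path(ns_id: str) -> str:
    """Follow parent pointers upward to build the qualified name,
    walking the linked structure the parent IDs create."""
    parts = []
    current: Optional[str] = ns_id
    while current is not None and current in table:
        entry = table[current]
        parts.append(entry.name)
        current = entry.parent_id
    return ":".join(reversed(parts))

# A few entries from Table 1; the "root" record with ID 01 is assumed.
register("root", "01", None)
register("File", "02", "01")
register("Render1", "04", "02")
print(full_path("04"))                 # -> root:File:Render1
```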

3. Visual Communication Partitioning Model

The image seen by the human eye is stored, in actual storage, mainly as a continuous image, and the corresponding two-dimensional image vector expression is obtained, i.e.

\[\label{e4} g\left( {x,y} \right) = \left\{ {{g_r}\left( {{x_i},{y_i}} \right),{g_g}\left( {{x_i},{y_i}} \right),{g_b}\left( {{x_i},{y_i}} \right)} \right\}.\tag{4}\]

Where \({g_r}\left( {{x_i},{y_i}} \right)\) is the vector function of the red primary colour in the image signal; \({{g_g}\left( {{x_i},{y_i}} \right)}\) is the vector function of the green primary colour; \({{g_b}\left( {{x_i},{y_i}} \right)}\) is the vector function of the blue primary colour; \(g\left( {x,y} \right)\) is the signal expression of the original image; \({{x_i}}\) is the light intensity function in the image; and \({{y_i}}\) is the spatial coordinate function in the image. Different primary colours correspond to different colour channels with certain differences. For computer graphics images, when the colour saturation is 0, it can be shown that \({g_r}\left( {{x_i},{y_i}} \right) = {g_g}\left( {{x_i},{y_i}} \right) = {g_b}\left( {{x_i},{y_i}} \right)\). In this case, the computer graphics image appears grey, i.e., as a black-and-white image. With the help of this colour representation, an image partitioning model may be effectively built. To ensure that all components are held safely in the clustering centre, the visual communication partitions are first unified and pooled [20]. Second, the combined visual data is used to regulate the distance between each visual communication partition and the cluster, and the final calculation results are entered directly into the classifier as a histogram [21]. To minimise quantisation errors, each clustering centre is encoded. Additionally, to continuously improve the ability to discriminate discrete information, the initial discretisation method is changed after encoding to a weighted processing method, which can considerably enhance picture recognition performance [22].
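The sketch below illustrates this pooling-and-encoding pipeline under simple assumptions: random vectors stand in for local descriptors, scikit-learn's KMeans supplies the clustering centres, and the normalised assignment histogram is what would be passed to the classifier; the codebook size of 16 is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

# Local feature vectors are pooled into cluster centres, each feature
# is assigned to its nearest centre, and the assignment counts form
# the histogram fed to the classifier.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 8))       # stand-in local descriptors

n_centres = 16                             # illustrative codebook size
kmeans = KMeans(n_clusters=n_centres, n_init=10, random_state=0).fit(features)

assignments = kmeans.predict(features)     # nearest cluster per feature
histogram, _ = np.histogram(assignments, bins=np.arange(n_centres + 1))
histogram = histogram / histogram.sum()    # normalise to reduce quantisation bias
print(histogram)
```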

4. Experimental

A. Establishing the data set and setting the experimental parameters

An experimental comparison was used to test the performance of the approach thoroughly and so confirm the validity and reliability of the visual communication design method. First, the dataset parameters were established from the constructed graphic image dataset: four distinct image types were identified, 36 visually similar photos were added to them, and four image classes were defined. (1) Class A images: a comprehensive collection of building decoration images. (2) Class B images: a comprehensive collection of pet images. (3) Class C images: a comprehensive collection of images of everyday household items. (4) Class D images: an extensive library of pictures of natural settings. These 40 labelled images were distributed at random among 9,960 other images to create a 10,000-image sample, which served as the dataset. From this dataset, 2,000 photos were drawn by random sampling as training samples, and the remaining 8,000 images were used as test samples. When characterising the original image features, the image scales are extracted using subsets and divided into image blocks of radius \(\sqrt {2R}\), with \(R\) usually taken as 16. In addition, the pixel step between features is set uniformly to 2, at which point the area of each image scale is calculated as \(\sqrt {2R} \times \sqrt {2R}\). If the feature distribution fully meets the criteria of a Gaussian distribution, the fixed scale can be uniformly set to a spatial pyramid scheme, and the program tool is configured on a designated personal computer (PC) to train the training samples accurately.
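A short sketch of this split and sampling setup follows, assuming the 2,000-training/8,000-test reading described above; the random seed and the use of a simple permutation are illustrative choices, not specified in the paper.

```python
import numpy as np

# 10,000 images in total: 2,000 drawn at random for training and the
# remaining 8,000 for testing, with dense features sampled every
# 2 pixels at radius sqrt(2R), R = 16.
rng = np.random.default_rng(42)
indices = rng.permutation(10_000)
train_idx, test_idx = indices[:2_000], indices[2_000:]

R = 16
radius = np.sqrt(2 * R)                 # block radius, ~5.66 pixels
step = 2                                # uniform pixel step between features
print(len(train_idx), len(test_idx), round(radius, 2))
```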

B. Evaluation of experimental parameters

To determine the effects of parameter changes on other indicators, the method's relevant parameters, such as the dictionary space capacity of the feature space, are tested and analysed. Figure 4 illustrates how the algorithm's accuracy tends to climb as the dictionary space capacity increases; the growth slows once the dictionary size reaches 1024 MB, and accuracy also climbs with the number of feature space distributions, peaking at 256 MB of feature space distributions. Evidently, the picture recognition accuracy changes with both the dictionary space capacity and the number of feature space distributions. When these two parameters are zero, the image recognition accuracies are only 67% and 73% respectively, their lowest values [23]. Retaining as much information as possible considerably enhances the saturation of the feature space, ensuring it can always operate at peak performance. At this stage, the number of feature space distributions should be set to 256 MB and the dictionary space capacity uniformly to 1024 MB. To balance image recognition accuracy and recognition efficiency as the accuracy rises to its maximum, the optimal setting of the weight parameter must be evaluated, producing the spatially clustered image shown in Figure 5.

As observed in Figure 5, with the dictionary space capacity and the number of feature space distributions at 1024 MB and 256 MB respectively, the accuracy of the method exhibits a few oscillations as the weight parameter increases. The image identification accuracy rises to 89.59% when the weight parameter is set to 0.34.
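The following sketch mirrors this tuning procedure; `evaluate` is a stand-in for one full train/test run, and the synthetic score it returns is shaped only so that the sweep peaks near the reported optimum (1024 MB, 256 MB, weight 0.34), so every numeric detail of the stub is an assumption.

```python
from itertools import product

def evaluate(dict_capacity: int, distributions: int, weight: float) -> float:
    # Stand-in for one full train/test run of the recognition pipeline.
    # The synthetic score merely peaks near the reported optimum;
    # replace this stub with the real pipeline.
    penalty = (abs(dict_capacity - 1024) / 1024
               + abs(distributions - 256) / 256
               + abs(weight - 0.34))
    return 0.8959 - 0.01 * penalty

dict_capacities = [256, 512, 1024, 2048]     # MB, values reported above
distribution_counts = [64, 128, 256, 512]    # MB, values reported above
weights = [round(0.1 * k, 2) for k in range(1, 10)]

best = max(product(dict_capacities, distribution_counts, weights),
           key=lambda cfg: evaluate(*cfg))
print("best configuration:", best)
```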

C. Comparison of image recognition results

To better compare the image recognition effect of this method with the conventional PCANet plus linear discriminant method and the computer vision method, four types of images are chosen from the determined test set and combined as representative samples for image recognition. The four types of image samples are then detected and recognised in 800 test set images to obtain the image recognition results of the three methods.

Table 2: Image recognition accuracy of the three methods (%)

Usage method                              Class A    Class B    Class C    Class D
Method of this article                    82.37      83.37      87.14      85.76
PCANet and linear discriminant methods    75.38      76.65      81.19      74.54
Computer vision methods                   73.76      72.87      79.30      73.75

Table 2 shows that, compared with the other three classes, Class C images have noticeably higher recognition accuracy. It can be seen that, in the context of applying computer graphics image processing technology, building visual communication instances with the visual communication design method proposed in this study improves the ease of recognising simple images.

D. Character movement effects

Images were required both to construct the system and to present the imaging effect. An animated character was used as the experimental object, and screenshots were acquired at four randomly chosen time points to capture the character's motion as generated by the designed system and by the existing visual communication system, respectively [24]. The motion images created by the existing system were compared with those created by the system in this research to depict the system's effectiveness more intuitively. The experimental findings are displayed in Figure 6.

To facilitate observation, the key motion nodes of the three-dimensional animated character are marked with dots, with the four points of the left limb numbered 1 to 4 and the four points of the right limb numbered 5 to 8. Figure 6 shows that the designed system's animated character moves cohesively. The character's first action is lifting the left foot; the second is stepping forward with the left foot, the stride growing larger; the third is the left foot landing; the fourth is lifting the right foot and repeating the left foot's action; and the fifth is the right foot landing. The posture of the limbs during the walking motion produces a continuous effect comparable to the movement of a real person, and the succession of walking movements is portrayed coherently. In contrast, the existing approach merely captures displacement images of the animated character, with no change in limb movement [25]. This shows that the method designed for animated character graphic visual communication in a virtual reality setting is more effective at creating motion continuity in animated character graphic images [26].
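As an illustration of why the limb positions evolve smoothly, the sketch below interpolates the eight key nodes between two hypothetical keyframes; the coordinates and the linear blend are assumptions for demonstration, not the paper's capture data.

```python
import numpy as np

# The eight limb key nodes (1-4 left, 5-8 right) are stored per
# keyframe and interpolated, so limb positions change smoothly between
# captured actions instead of jumping as in displacement-only capture.
key_a = np.array([[0.0, 1.0]] * 8)          # hypothetical pose: foot grounded
key_b = key_a + np.array([0.3, 0.1])        # hypothetical pose: foot lifted

def interpolate(t: float) -> np.ndarray:
    """Pose at t in [0, 1] between the two keyframes."""
    return (1.0 - t) * key_a + t * key_b

for t in (0.0, 0.5, 1.0):
    print(t, interpolate(t)[0])             # trajectory of node 1
```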

E. Image Communication Effect

In the colour design of graphic pictures, the RGB three primary colours are employed, whereas the CMYK four primary colours are used in the design of printed works. Red, green, and blue (RGB) are overlaid as pixel values on the display. Combining the three primary colours, each with 256 brightness levels, yields 16,777,216 (about 16.78 million) distinct colours. Cyan, magenta, yellow, and black make up the four colours of CMYK, each varying from 0% to 100%. Different display devices may produce different colour representations; as a result, a palette of 216 web-safe colours, comprising 210 colours and 6 non-colours, is typically used to create visual homogeneity. Colours are represented in hexadecimal format to avoid distortion under different display conditions.
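A small sketch of this hexadecimal, web-safe representation follows; the `to_web_safe` helper and the multiples-of-51 snapping rule (which generates the 6 x 6 x 6 = 216 palette) are standard practice rather than details from the paper.

```python
# Snap an RGB triple (256 levels per channel) to the nearest web-safe
# value and write it in hexadecimal so it renders consistently across
# display devices.
def to_web_safe(r: int, g: int, b: int) -> str:
    snap = lambda v: 51 * round(v / 51)    # nearest of 0, 51, ..., 255
    return "#{:02X}{:02X}{:02X}".format(snap(r), snap(g), snap(b))

print(to_web_safe(120, 200, 30))           # -> #66CC33
```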

The three graphics in Figure 7 were created utilising graphic image processing technologies as design pieces for visual communication. Through the processing of graphic components, picture elements, and colours in the design of visual communication, the three works successfully enhance the visual communication effect.

5. Conclusion

This research provides a novel visual communication method for the graphic pictures of animated characters in a virtual reality setting. The purpose of simulation experiments is to create the animated character’s motion utilising the designed and existing systems, respectively. The effectiveness of the designed system is confirmed by an analysis of the animated character’s motion grasping comparison chart, which reveals that the animated character’s motion movements created by the designed system are more cogent and that the continuous effect of the position of the limbs in the motion is similar to that of a real person.

Funding

This study received funding support from the 2023 Guangdong Construction Vocational and Technical College campus-level Research and Innovation Team Project: Digital Media Technology Research Team, Project Number CXTD202307.

References

  1. Son, E. (2022). Visual, auditory, and psychological elements of the characters and images in the scenes of the animated film, Inside Out. Quarterly Review of Film and Video, 39(1), 225-240.
  2. Zhang, Z., Wu, Y., Pan, Z., Li, W., & Su, Z. (2022). A novel animation authoring framework for the virtual teacher performing experiment in mixed reality. Computer Applications in Engineering Education, 30(2), 550-563.
  3. Han, J., Zheng, Q., & Ding, Y. (2021). Lost in virtual reality? Cognitive load in high immersive VR environments. Journal of Advances in Information Technology, (4), 12.
  4. Zhu, W., Guo, S., & Zhao, J. (2021). Planning participants' preferential differences under immersive virtual reality and conventional representations: an experiment of street renewal. Environment and Planning B: Urban Analytics and City Science, 48(7), 1755-1769.
  5. Zhao, J., & Allison, R. (2021). The role of binocular vision in avoiding virtual obstacles while walking. IEEE Transactions on Visualization and Computer Graphics, 27(7), 3277-3288.
  6. Tabaki, N. (2021). Into the nebula: embodied perception of scenography in virtual environments. Performance Research, 26(3), 9-16.
  7. Nehmé, Y., Dupont, F., Farrugia, J. P., Callet, P. L., & Lavoué, G. (2021). Visual quality of 3d meshes with diffuse colors in virtual reality: subjective and objective evaluation. IEEE Transactions on Visualization and Computer Graphics, 27(3), 2202-2219.
  8. Lee, I. J. (2020). Kinect-for-Windows with augmented reality in an interactive roleplay system for children with an autism spectrum disorder. Interactive Learning Environments, (2), 1-17.
  9. Kellems, R. O., Charlton, C., Kversy, K. S., & Gyori, M. (2020). Exploring the use of virtual characters (avatars), live animation, and augmented reality to teach social skills to individuals with autism. Multimodal Technologies and Interaction, 4(3), 986-992.
  10. Radulescu, A., Opheusden, B. V., Callaway, F., Griffiths, T., & Hillis, J. (2020). Modeling visual search in naturalistic virtual reality environments. Journal of Vision, 20(11), 1401.
  11. Lee, J. J., & Park, J. M. (2020). 3d mirrored object selection for occluded objects in virtual environments. IEEE Access, 8, 200259-200274.
  12. Ekawardhani, Y. A., Santosa, I., Ahmad, H. A., & Irfansyah, I. (2020). Modification of visual characters in indonesia animation film. Harmonia Journal of Arts Research and Education, 20(2), 167-175.
  13. Kim, H., Lee, E. C., Seo, Y., Im, D., & Lee, I. K. (2020). Character detection in animated movies using multi-style adaptation and visual attention. IEEE Transactions on Multimedia, PP(99), 1-1.
  14. Yu, Y., & Sun, Y. (2021). Research on visual communication graphic design information system based on computer simulation. Journal of Physics: Conference Series, 1952(2), 022032 (6pp).
  15. Gansen, S., & James, P. (2021). A graphic turn for canadian foreign policy: insights from systemism. Canadian Foreign Policy Journal, 27(3), 271-291.
  16. Zhou, F., Su, Q., & Mou, J. (2021). Understanding the effect of website logos as animated spokescharacters on the advertising: a lens of parasocial interaction relationship. Technology in Society, 65, 101571.
  17. Di, N., Stefano, T., Federica, S., Chiara, I., Daniela, V., & Tiziana, M., et al. (2022). Behind a digital mask: users' subjective experience of animated characters and its effect on source credibility. Interacting with Computers, (5), 5.
  18. Gharaibeh, I. H. (2021). Real-time sign languages character recognition. International Journal of Computer Applications in Technology, 65(1), 125-130.
  19. Snyder, M. N., & Mares, M. L. (2021). Preschoolers’ choices of television characters as sources of information: effects of character type, format, and topic domain. Journal of Experimental Child Psychology, 203, 105034.
  20. Wu, H., Deng, Y., Pan, J., Han, T., & Zhang, X. L. (2021). User capabilities in eyes-free spatial target acquisition in immersive virtual reality environments. Applied Ergonomics, 94(6), 103400.
  21. Onime, C., Uhomoibhi, J., Santachiara, M., & Wang, H. (2021). A reclassification of markers for mixed reality environments. The International Journal of Information and Learning Technology, 38(1), 161-173.
  22. Zhu, W., Guo, S., & Zhao, J. (2021). Planning participants' preferential differences under immersive virtual reality and conventional representations: an experiment of street renewal. Environment and Planning B: Urban Analytics and City Science, 48(7), 1755-1769.
  23. Huang, Y., Richter, E., Kleickmann, T., & Richter, D. (2023). Comparing video and virtual reality as tools for fostering interest and self-efficacy in classroom management: results of a pre-registered experiment. British Journal of Educational Technology, 54(2), 76-85.
  24. Zhou, J., Li, B., Zhang, D., Yuan, J., Zhang, W., Cai, Z., & Shi, J. (2023). UGIF-Net: An efficient fully guided information flow network for underwater image enhancement. IEEE Transactions on Geoscience and Remote Sensing, 1, 17.
  25. Ali, J., Shan, G., Gul, N., & Roh, B. H. (2023). An Intelligent Blockchain-based Secure Link Failure Recovery Framework for Software-defined Internet-of-Things. Journal of Grid Computing, 21(4), 57.
  26. Nazeer, S., Sultana, N., & Bonyah, E. Cycles and Paths Related Vertex-Equitable Graphs. Journal of Combinatorial Mathematics and Combinatorial Computing, 117, 15-24.

Citation

Zhengyu Yang. Innovative Design of Visual Communication System for Animated Character Graphic Images in Virtual Reality Environment [J]. Archives Des Sciences, Volume 74, Issue 3, 2024, 13-18. DOI: https://doi.org/10.62227/as/74303.