Journal of Information Systems Engineering and Management

A Research on the Dynamization Effect of Brand Visual Identity Design: Mediated by Digital Information Smart Media
Peijie Yuan 1 *
1 Ph.D. candidate, Kyiv National University of Technologies and Design, Kyiv, Ukraine
* Corresponding Author
Research Article

Journal of Information Systems Engineering and Management, 2024 - Volume 9 Issue 1, Article No: 24153
https://doi.org/10.55267/iadt.07.14078

Published Online: 25 Jan 2024


How to cite this article
APA 6th edition
In-text citation: (Yuan, 2024)
Reference: Yuan, P. (2024). A Research on the Dynamization Effect of Brand Visual Identity Design: Mediated by Digital Information Smart Media. Journal of Information Systems Engineering and Management, 9(1), 24153. https://doi.org/10.55267/iadt.07.14078
ABSTRACT
The article draws on literature research, case studies, and practical verification to examine the principles of brand visual identity and motion graphics design. It outlines the trends in dynamic brand visual identity design enabled by digital information technology and artificial intelligence, and explains how AI generative models such as GANs and diffusion models produce graphics and visual effects. Examples such as Stable Diffusion and Midjourney illustrate AI's potential to create diverse, abstract visuals for motion graphics, and combining AI with AR/VR could further enable interactive effects. Overall, AI can empower dynamic, personalized graphic design and branding. The key points are that dynamic design introduces interactivity and conveys brand meaning more effectively; that brand visual design is diversifying, with the core brand image and its dynamic performance reinforcing each other; and that AI can improve the efficiency, innovation, and meaning of dynamic design. Although 2D branding remains mainstream and relevant, the article highlights the future potential of AI in motion graphics and visual storytelling, as it can generate new interpretations and experiences.
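
To illustrate the kind of AI-assisted visual generation the abstract refers to, the following minimal Python sketch uses the open-source Stable Diffusion model through the Hugging Face diffusers library to produce a set of abstract brand visuals from a text prompt. The example is not drawn from the article; the model checkpoint, prompt, and generation settings are illustrative assumptions.

# Minimal sketch (not from the article): generating abstract brand visuals
# with Stable Diffusion via the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision; assumes a CUDA-capable GPU
)
pipe = pipe.to("cuda")

# Hypothetical brand-oriented prompt describing abstract, motion-friendly visuals.
prompt = ("abstract flowing gradient shapes, minimalist brand identity, "
          "dynamic motion, vivid colors")

# Generate several candidate frames that a designer could curate and animate.
result = pipe(prompt, num_images_per_prompt=4, guidance_scale=7.5)
for i, image in enumerate(result.images):
    image.save(f"brand_visual_{i}.png")

In a dynamic-identity workflow, such generated stills would typically serve as raw material that designers refine and sequence into motion graphics, rather than as finished brand assets.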
LICENSE
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.