Master's thesis
Cloud Computing Architecture

Ivan Balatinac (2015)
Sveučilište Josipa Jurja Strossmayera u Osijeku
Fakultet elektrotehnike, računarstva i informacijskih tehnologija Osijek
Zavod za programsko inženjerstvo
Katedra za programske jezike i sustave
Metadata
Title: Arhitektura programiranja oblaka
Author: Ivan Balatinac
Mentor(s): Ivica Crnković (thesis advisor)
Abstract
U ovome radu objašnjen je koncept dubinskog učenja, kao naprednog postupka učenja različitih razina značajki podataka. Arhitektura dubinskih neuronskih mreža mnogo je složenija od arhitekture standardnih mreža te sadrži minimalno dva skrivena sloja. Tri glavne arhitekture dubinskih mreža su naslagani autoenkoderi, deep belief networks i konvolucijske mreže. Naslagani autoenkoderi grade se od prethodno treniranih, običnih autoenkodera. Nakon postupka predtreniranja cijela mreža se fino podešava, čime je postupak treniranja završen. Treniranje traje višestruko duže u odnosu na treniranje standardnih mreža. Na provedenim eksperimentima ostvaren je značajan napredak u točnosti klasifikacije korištenjem naslaganih autoenkodera u odnosu na standardne neuronske mreže. Potrebno je eksperimentirati s različitim dimenzijama skrivenih slojeva kako bi se pronašle optimalne postavke rada dubinskih mreža za određeni skup podataka. Ključne riječi: autoenkoder, dubinsko učenje, naslagani autoenkoderi, neuronske mreže, pohlepno treniranje po slojevima
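A minimal formal sketch of the single autoencoder on which the abstract builds (the notation below is illustrative and not taken from the thesis): an encoder maps the input x to a hidden code h, a decoder reconstructs the input, and both are trained to minimize the reconstruction error.

    h = f(Wx + b)
    \hat{x} = g(W'h + b')
    L(x, \hat{x}) = \lVert x - \hat{x} \rVert^2

Here f and g are nonlinearities such as the sigmoid. In a stacked autoencoder the code h of one trained autoencoder becomes the input of the next, which is the greedy layer-wise pretraining the abstract refers to.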
Keywords: autoencoder, deep learning, stacked autoencoders, neural networks, greedy layer-wise training
Parallel title (English): Cloud Computing Architecture
Committee Members: Ivica Crnković (committee member)
Goran Martinović (committee member)
Krešimir Nenadić (committee member)
Granter: Sveučilište Josipa Jurja Strossmayera u Osijeku
Fakultet elektrotehnike, računarstva i informacijskih tehnologija Osijek
Lower level organizational units: Zavod za programsko inženjerstvo
Katedra za programske jezike i sustave
Place: Osijek
State: Croatia
Scientific field, discipline, subdiscipline: TECHNICAL SCIENCES
Computing
Software Engineering
Study programme type: university
Study level: graduate
Study programme: Graduate University Study Programme in Computer Engineering
Academic title abbreviation: mag.ing.comp.
Genre: master's thesis
Language: Croatian
Defense date: 2015-09-21
Parallel abstract (English)
This paper explains the concept of deep learning as an advanced method for learning different levels of data features. The architecture of deep neural networks is far more complex than that of standard networks and contains at least two hidden layers. The three main deep network architectures are stacked autoencoders, deep belief networks, and convolutional networks. Stacked autoencoders are composed of ordinary autoencoders that were trained beforehand. After the pretraining step, the whole network is fine-tuned, which completes the training process. Training takes many times longer than training a standard network. In the experiments performed, a significant improvement in classification accuracy was achieved with stacked autoencoders compared to standard neural networks. Experimenting with different dimensions of the hidden layers is needed to find the optimal settings of a deep network for a specific data set.
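The training procedure the abstract describes, greedy layer-wise pretraining of individual autoencoders followed by supervised fine-tuning of the stacked network, can be sketched as follows. This is an illustrative PyTorch sketch, not the code used in the thesis; the layer sizes, optimizer settings, and the random stand-in data are assumptions for demonstration only.

import torch
import torch.nn as nn

def pretrain_autoencoder(data, in_dim, hidden_dim, epochs=20, lr=1e-3):
    # Train a single autoencoder to reconstruct `data` and return its encoder.
    encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
    decoder = nn.Linear(hidden_dim, in_dim)
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        reconstruction = decoder(encoder(data))
        loss = loss_fn(reconstruction, data)  # reconstruction error
        loss.backward()
        opt.step()
    return encoder

# Random stand-in data (assumed shapes): 1000 samples, 784 features, 10 classes.
x = torch.rand(1000, 784)
y = torch.randint(0, 10, (1000,))

# Greedy layer-wise pretraining: the second autoencoder is trained on the
# features produced by the first, already-trained encoder.
enc1 = pretrain_autoencoder(x, 784, 256)
enc2 = pretrain_autoencoder(enc1(x).detach(), 256, 64)

# Stack the pretrained encoders, attach a classifier head, and fine-tune
# the whole network end to end on the supervised task.
model = nn.Sequential(enc1, enc2, nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for _ in range(50):
    opt.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    opt.step()

The pretraining stage only ever optimizes one layer at a time on an unsupervised reconstruction objective; the labels are used only in the final fine-tuning loop, which is what distinguishes this scheme from training a standard network from random initialization.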
Parallel keywords (Croatian): autoenkoder, dubinsko učenje, naslagani autoenkoderi, neuronske mreže, pohlepno treniranje po slojevima
Resource type: text
Access condition: Open access
Terms of use: http://rightsstatements.org/vocab/InC/1.0/
URN:NBN: https://urn.nsk.hr/urn:nbn:hr:200:989836
Committer: Anka Ovničević