[모두를 위한 딥러닝] #3. Multi-variable Linear Regression
- Hypothesis
- Cost function
- Gradient descent
Cost function
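For reference, with three input features the hypothesis and cost take the standard mean-squared-error form used in the earlier lectures (m is the number of training instances):

$$H(x_1, x_2, x_3) = w_1 x_1 + w_2 x_2 + w_3 x_3 + b$$

$$cost(W, b) = \frac{1}{m} \sum_{i=1}^{m} \left( H\left(x_1^{(i)}, x_2^{(i)}, x_3^{(i)}\right) - y^{(i)} \right)^2$$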
Matrix multiplication
- Expressed as a dot product: each row (one instance's input features) is multiplied by a column (the weights)
- Rather than computing instance by instance, stack all instances into a single matrix and compute them in one multiplication
=> (number of instances x number of input features) x (number of weights (= number of input features) x number of output features) = (number of instances x number of output features); a quick shape check in code follows below
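As a sanity check on those dimensions, a minimal PyTorch sketch (the sizes here are made up for illustration: 5 instances, 3 input features, 1 output feature):

import torch

X = torch.randn(5, 3)  # (instances x input features)
W = torch.randn(3, 1)  # (weights: input features x output features)
H = X.mm(W)            # (instances x output features)
print(H.shape)         # torch.Size([5, 1])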
Implementing Multi-variable Linear Regression in PyTorch
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.optim as optim
Loading the data
# First we load the entire CSV file into an m x 3 tensor
D = torch.tensor(pd.read_csv("linreg-multi-synthetic-2.csv", header=None).values, dtype=torch.float)
# We extract all rows and the first 2 columns, and then transpose it
x_dataset = D[:, 0:2].t()
# We extract all rows and the last column (1-D, so .t() leaves it unchanged)
y_dataset = D[:, 2].t()
# And make a convenient variable to remember the number of input columns
n = 2
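A quick sanity check of the resulting shapes (m is however many rows the CSV has):

print(D.shape)          # torch.Size([m, 3])
print(x_dataset.shape)  # torch.Size([2, m]): features as rows, one column per instance
print(y_dataset.shape)  # torch.Size([m])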
Defining the model
### Model definition ###
# First we define the trainable parameters A and b
A = torch.randn((1, n), requires_grad=True)
b = torch.randn(1, requires_grad=True)
# Then we define the prediction model
def model(x_input):
    return A.mm(x_input) + b

### Loss function definition ###
def loss(y_predicted, y_target):
    return ((y_predicted - y_target)**2).sum()
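Because A is (1 x n) and x_dataset is (n x m), model(x_dataset) returns a (1 x m) row of predictions, one per instance, which is exactly the matrix picture from the section above:

print(model(x_dataset).shape)  # torch.Size([1, m]): one prediction per column/instance

Note also that this loss is a sum rather than a mean of squared errors, so its raw value scales with the number of instances; Adam still converges here, but dividing by m would make the loss comparable across dataset sizes.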
Training the model
### Training the model ###
# Set up the optimizer object, so it optimizes A and b.
optimizer = optim.Adam([A, b], lr=0.1)
# Main optimization loop
for t in range(2000):
    # Set the gradients to 0.
    optimizer.zero_grad()
    # Compute the current predicted y's from x_dataset
    y_predicted = model(x_dataset)
    # See how far off the prediction is
    current_loss = loss(y_predicted, y_dataset)
    # Compute the gradient of the loss with respect to A and b.
    current_loss.backward()
    # Update A and b accordingly.
    optimizer.step()
    print(f"t = {t}, loss = {current_loss.item()}, A = {A.detach().numpy()}, b = {b.item()}")