Using AI for work
I have started using Copilot more and more for work. It seems that if you have a general idea of the work, it can get the code roughly 70% of the way there. It's not perfect - for example, it almost always messes up lambda functions - but that 30% of problematic work is something one can fix by hand (a sketch of the kind of slip-up I mean is below).
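To make the lambda complaint concrete, here is a minimal Python sketch of the kind of slip-up I mean - the classic late-binding trap with loop variables. This is my own illustrative example, not output from any particular Copilot session.

```python
# A minimal sketch of the lambda bug one keeps fixing by hand:
# the loop variable is captured by reference, so every lambda ends up
# seeing its final value (the classic Python late-binding trap).

# Buggy version (the sort of thing generated code often contains):
callbacks = [lambda x: x * factor for factor in range(1, 4)]
print([cb(10) for cb in callbacks])  # [30, 30, 30] - all use factor == 3

# Fixed version: bind the current value as a default argument.
callbacks = [lambda x, factor=factor: x * factor for factor in range(1, 4)]
print([cb(10) for cb in callbacks])  # [10, 20, 30]
```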
Meanwhile, using other AIs gives different outcomes, so it turns out the training data and how you use them matter a lot. Anyway, I think human intuition and other hard-won skills still matter, and when the AI screws something up, knowing the nitty-gritty of things really comes in handy. But yeah, it's going to mess with people's learning habits and probably produce more scatter-brained people. Learning is a hard process and the inception of knowledge is a difficult thing. Looking around, we already find so many idiots with dirt for brains; for disciplines like engineering, where really complex knowledge needs to be ingested, AI is probably going to tip people over into being unable to learn. But that's a problem for the future.
Anyway, another topic was whether I should invest in a PC that could run local AI models. But a decent, usable setup costs more than 1 lakh, and that's a bit too much money right now. I guess if I had 15-20 lakh, then spending a couple of lakh on such a PC would not have been all that difficult. But right now my work is getting done via the free models, so there's that. Regarding hardware, I found out that NVIDIA GPUs have the best AI support right now, and they provide 230+ TOPS of performance. Compared to my integrated Vega 7 graphics' piddly 1.6 TOPS, that's an astronomical increase - roughly 140 times over.
But I have another arena to explore, and that is small models. From what I see in AI-generated results, they can also perform certain tasks well enough, so I'm going to explore that (a rough sketch of what a first experiment might look like is below).
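As a starting point, here is a hedged sketch of what trying out a small model locally might look like. It assumes the Hugging Face transformers library; the model name is just an example of a small model and not a recommendation from any benchmark.

```python
# A rough sketch of poking at a small local model, assuming the Hugging Face
# `transformers` library is installed. The model name is only an example of a
# small (~0.5B parameter) model; swap in whatever fits in local RAM/VRAM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed small model for illustration
)

prompt = "Write a Python function that reverses a string."
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])
```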
Let's see if we find something meaningful there. Cheers!!